* [PATCH 00/18] Support SDEI Virtualization
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This series adds support for SDEI virtualization. The motivation is the
Asynchronous Page Fault feature, which needs a virtualized SDEI event to
deliver page-not-present notifications from host to guest. This series
depends on the series "Refactor SDEI Client Driver", which was posted
previously. The SDEI specification, the previous posting, and both
series (branches "sdei_client" and "sdei") can be found below:

   https://developer.arm.com/documentation/den0054/a/
   https://www.spinics.net/lists/arm-kernel/msg826783.html
   https://github.com/gwshan/linux  ("sdei_client")
   https://github.com/gwshan/linux  ("sdei")

First of all, bits[23:20] of the SDEI event number are reserved to
indicate the SDEI event type (a short sketch of the encoding follows
the list):

   0x0: physical SDEI event number, originating from the underlying
        firmware.
   0x1: virtual SDEI event number, injected by KVM on behalf of a
        physical SDEI event. The corresponding SDEI events are also
        called passthrough SDEI events.
   0x2: KVM private SDEI event number, originating from KVM itself.
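
To make the encoding concrete, here is a condensed sketch of the helpers
that PATCH[04] adds to kvm_sdei.h. The helper name kvm_sdei_num_type()
is used only for illustration; the patch itself generates per-type
checks (kvm_sdei_num_is_{phys,virt,priv}()) from a macro:

   #define KVM_SDEI_EV_NUM_TYPE_SHIFT   20
   #define KVM_SDEI_EV_NUM_TYPE_MASK    0xF

   /* 0x0: physical, 0x1: virtual (passthrough), 0x2: KVM private */
   static inline unsigned long kvm_sdei_num_type(unsigned long num)
   {
           return (num >> KVM_SDEI_EV_NUM_TYPE_SHIFT) &
                  KVM_SDEI_EV_NUM_TYPE_MASK;
   }

   /* Strip the type bits to recover the standard SDEI event number */
   static inline unsigned long kvm_sdei_num_to_std(unsigned long num)
   {
           return num & ~(KVM_SDEI_EV_NUM_TYPE_MASK <<
                          KVM_SDEI_EV_NUM_TYPE_SHIFT);
   }

For example, the KVM private event 0x40200000 used later in the series
has (0x40200000 >> 20) & 0xF == 0x2, i.e. KVM_SDEI_EV_NUM_TYPE_PRIV.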

The implementation supports passthrough and KVM private SDEI events. The
same SDEI event can be registered and enabled on multiple VMs, so each
registered SDEI event is represented by "struct kvm_sdei_event" and
linked into a global list. A "struct kvm_sdei_kvm_event" is created and
inserted into the RB-tree in "struct kvm_sdei_event", indexed by
@kvm->userspace_pid, when the corresponding SDEI event is registered on
a particular VM. Besides, "struct kvm_sdei_vcpu_event" is introduced to
deliver an SDEI event to one particular vCPU. So the data structures
have different scopes, summarized below and sketched after the list:

   struct kvm_sdei_event: global scope
   struct kvm_sdei_kvm_event: VM scope
   struct kvm_sdei_vcpu_event: vCPU scope
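
The three structures, trimmed here to a few representative fields,
relate to each other as below; the full definitions are introduced in
PATCH[04]:

   /* Global scope: one instance per event number */
   struct kvm_sdei_event {
           struct sdei_event     *event; /* backing physical event, or ... */
           struct kvm_sdei_priv  *priv;  /* ... the KVM private descriptor */
           struct rb_root        root;   /* per-VM registrations below     */
           unsigned long         num;
           struct list_head      link;   /* linked into @kvm_sdei_events   */
   };

   /* VM scope: one instance per VM that registered the event */
   struct kvm_sdei_kvm_event {
           struct kvm_sdei_event *event;
           struct kvm            *kvm;
           struct rb_node        node;   /* keyed by @kvm->userspace_pid   */
   };

   /* vCPU scope: an event pending for one particular vCPU */
   struct kvm_sdei_vcpu_event {
           struct kvm_sdei_kvm_event *event;
           struct list_head          link; /* on @vcpu->arch.sdei_events   */
   };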

For the passthrough SDEI events, a dedicated handler is registered with
the underlying firmware if it's supported. The core functionality of the
handler is to route the incoming SDEI events to the target VMs and vCPUs.
A shared SDEI event is duplicated to all VMs where the SDEI event was
registered and enabled, and the target vCPU is chosen based on the
configured routing affinity. For a private SDEI event, the event received
from the physical CPU is duplicated and delivered to the vCPUs that are
currently running or suspended on that physical CPU. For a KVM private
event, which is pre-defined and represented by "struct kvm_sdei_priv",
the API kvm_sdei_inject() is always called to deliver the event to the
specified vCPU.

The series is organized as follows:

PATCH[01-02] Retrieve the event signaled property on registration and add
             the API (sdei_event_get_info()) to retrieve an event's
             information from the underlying firmware for the passthrough
             SDEI events.
PATCH[03]    Introduce a template for smccc_get_argx().
PATCH[04]    Add the needed source files and data structures.
PATCH[05-13] Support various hypercalls defined in the SDEI specification
             (v1.0).
PATCH[14]    Implement the SDEI handler to route the incoming passthrough
             SDEI events to the target VMs and vCPUs.
PATCH[15-16] Support more hypercalls: COMPLETE, COMPLETE_AND_RESUME, and
             CONTEXT.
PATCH[17]    Support injecting KVM private SDEI events and expose the SDEI
             capability.
PATCH[18]    Add a self-test case for KVM private SDEI events.

Gavin Shan (18):
  drivers/firmware/sdei: Retrieve event signaled property on
    registration
  drivers/firmware/sdei: Add sdei_event_get_info()
  arm/smccc: Introduce template for inline functions
  arm64/kvm: Add SDEI virtualization infrastructure
  arm64/kvm: Support SDEI_1_0_FN_SDEI_VERSION hypercall
  arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_REGISTER
  arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_{ENABLE, DISABLE} hypercall
  arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_UNREGISTER hypercall
  arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_STATUS hypercall
  arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_GET_INFO hypercall
  arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET hypercall
  arm64/kvm: Support SDEI_1_0_FN_SDEI_PE_{MASK, UNMASK} hypercall
  arm64/kvm: Support SDEI_1_0_FN_SDEI_{PRIVATE,SHARED}_RESET hypercall
  arm64/kvm: Implement event handler
  arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_{COMPLETE,
    COMPLETE_AND_RESUME} hypercall
  arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_CONTEXT hypercall
  arm64/kvm: Expose SDEI capability
  kvm/selftests: Add SDEI test case

 arch/arm64/include/asm/kvm_emulate.h       |    2 +
 arch/arm64/include/asm/kvm_host.h          |   10 +
 arch/arm64/include/asm/kvm_sdei.h          |  117 ++
 arch/arm64/kvm/Makefile                    |    2 +-
 arch/arm64/kvm/aarch32.c                   |    8 +
 arch/arm64/kvm/arm.c                       |   19 +
 arch/arm64/kvm/hypercalls.c                |   19 +
 arch/arm64/kvm/inject_fault.c              |   30 +
 arch/arm64/kvm/reset.c                     |    3 +
 arch/arm64/kvm/sdei.c                      | 1322 ++++++++++++++++++++
 drivers/firmware/arm_sdei.c                |   38 +
 include/kvm/arm_hypercalls.h               |   34 +-
 include/linux/arm_sdei.h                   |    7 +
 include/uapi/linux/kvm.h                   |    4 +
 tools/testing/selftests/kvm/Makefile       |    1 +
 tools/testing/selftests/kvm/aarch64/sdei.c |  170 +++
 16 files changed, 1766 insertions(+), 20 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_sdei.h
 create mode 100644 arch/arm64/kvm/sdei.c
 create mode 100644 tools/testing/selftests/kvm/aarch64/sdei.c

-- 
2.23.0


* [PATCH 01/18] drivers/firmware/sdei: Retrieve event signaled property on registration
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This retrieves the event signaled property when the event is created for
the first time. The property will be needed once SDEI virtualization is
supported.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 drivers/firmware/arm_sdei.c | 6 ++++++
 include/linux/arm_sdei.h    | 1 +
 2 files changed, 7 insertions(+)

diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
index 9c7a6a7c9527..3b34501610f9 100644
--- a/drivers/firmware/arm_sdei.c
+++ b/drivers/firmware/arm_sdei.c
@@ -225,6 +225,12 @@ static struct sdei_internal_event *sdei_event_create(u32 event_num,
 		goto fail;
 	event->type = result;
 
+	err = sdei_api_event_get_info(event_num, SDEI_EVENT_INFO_EV_SIGNALED,
+				      &result);
+	if (err)
+		goto fail;
+	event->signaled = result;
+
 	if (event->type == SDEI_EVENT_TYPE_SHARED) {
 		reg = kzalloc(sizeof(*reg), GFP_KERNEL);
 		if (!reg) {
diff --git a/include/linux/arm_sdei.h b/include/linux/arm_sdei.h
index 2723a99937f3..447fe4ae8d8b 100644
--- a/include/linux/arm_sdei.h
+++ b/include/linux/arm_sdei.h
@@ -26,6 +26,7 @@ struct sdei_event {
 	u32			event_num;
 	u8			type;
 	u8			priority;
+	u8			signaled;
 };
 
 /*
-- 
2.23.0


* [PATCH 02/18] drivers/firmware/sdei: Add sdei_event_get_info()
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This adds the API sdei_event_get_info(), which will be used by SDEI
virtualization to retrieve information about the specified event from
the firmware.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 drivers/firmware/arm_sdei.c | 13 +++++++++++++
 include/linux/arm_sdei.h    |  2 ++
 2 files changed, 15 insertions(+)

diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
index 3b34501610f9..6bc84ab317d3 100644
--- a/drivers/firmware/arm_sdei.c
+++ b/drivers/firmware/arm_sdei.c
@@ -191,6 +191,19 @@ static int sdei_api_event_get_info(u32 event, u32 info, u64 *result)
 			      0, 0, result);
 }
 
+int sdei_event_get_info(u32 event_num, u32 info, u64 *result)
+{
+	int err;
+
+	mutex_lock(&sdei_events_lock);
+
+	err = sdei_api_event_get_info(event_num, info, result);
+
+	mutex_unlock(&sdei_events_lock);
+
+	return err;
+}
+
 static struct sdei_internal_event *sdei_event_create(u32 event_num,
 						     sdei_event_callback *cb,
 						     void *cb_arg)
diff --git a/include/linux/arm_sdei.h b/include/linux/arm_sdei.h
index 447fe4ae8d8b..28d5d5853314 100644
--- a/include/linux/arm_sdei.h
+++ b/include/linux/arm_sdei.h
@@ -29,6 +29,8 @@ struct sdei_event {
 	u8			signaled;
 };
 
+int sdei_event_get_info(u32 event_num, u32 info, u64 *result);
+
 /*
  * Register your callback to claim an event. The event must be described
  * by firmware.
-- 
2.23.0


* [PATCH 03/18] arm/smccc: Introduce template for inline functions
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

The inline functions used to get the SMCCC parameters share the same
layout, so they can be generated from a template to simplify the code.
This also adds similar inline functions, smccc_get_arg{4,5,6,7,8}(), to
access additional SMCCC arguments, which is required to support SDEI
virtualization.
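
As an illustration, one instantiation added below,
SMCCC_DECLARE_GET_FUNCTION(unsigned long, arg1, 1), expands to the
equivalent of the open-coded helper it replaces:

   static inline unsigned long smccc_get_arg1(struct kvm_vcpu *vcpu)
   {
           return vcpu_get_reg(vcpu, (1));
   }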

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 include/kvm/arm_hypercalls.h | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/include/kvm/arm_hypercalls.h b/include/kvm/arm_hypercalls.h
index 0e2509d27910..1120eff7aa28 100644
--- a/include/kvm/arm_hypercalls.h
+++ b/include/kvm/arm_hypercalls.h
@@ -6,27 +6,21 @@
 
 #include <asm/kvm_emulate.h>
 
-int kvm_hvc_call_handler(struct kvm_vcpu *vcpu);
-
-static inline u32 smccc_get_function(struct kvm_vcpu *vcpu)
-{
-	return vcpu_get_reg(vcpu, 0);
+#define SMCCC_DECLARE_GET_FUNCTION(type, name, reg)		\
+static inline type smccc_get_##name(struct kvm_vcpu *vcpu)	\
+{								\
+	return vcpu_get_reg(vcpu, (reg));			\
 }
 
-static inline unsigned long smccc_get_arg1(struct kvm_vcpu *vcpu)
-{
-	return vcpu_get_reg(vcpu, 1);
-}
-
-static inline unsigned long smccc_get_arg2(struct kvm_vcpu *vcpu)
-{
-	return vcpu_get_reg(vcpu, 2);
-}
-
-static inline unsigned long smccc_get_arg3(struct kvm_vcpu *vcpu)
-{
-	return vcpu_get_reg(vcpu, 3);
-}
+SMCCC_DECLARE_GET_FUNCTION(u32,           function, 0)
+SMCCC_DECLARE_GET_FUNCTION(unsigned long, arg1,     1)
+SMCCC_DECLARE_GET_FUNCTION(unsigned long, arg2,     2)
+SMCCC_DECLARE_GET_FUNCTION(unsigned long, arg3,     3)
+SMCCC_DECLARE_GET_FUNCTION(unsigned long, arg4,     4)
+SMCCC_DECLARE_GET_FUNCTION(unsigned long, arg5,     5)
+SMCCC_DECLARE_GET_FUNCTION(unsigned long, arg6,     6)
+SMCCC_DECLARE_GET_FUNCTION(unsigned long, arg7,     7)
+SMCCC_DECLARE_GET_FUNCTION(unsigned long, arg8,     8)
 
 static inline void smccc_set_retval(struct kvm_vcpu *vcpu,
 				    unsigned long a0,
@@ -40,4 +34,6 @@ static inline void smccc_set_retval(struct kvm_vcpu *vcpu,
 	vcpu_set_reg(vcpu, 3, a3);
 }
 
+int kvm_hvc_call_handler(struct kvm_vcpu *vcpu);
+
 #endif
-- 
2.23.0


* [PATCH 04/18] arm64/kvm: Add SDEI virtualization infrastructure
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This adds SDEI virtualization infrastructure by introducing the source
files and data structures. Here are the details about the design:

   * The infrastructure is designed to route two types of SDEI events,
     which originate either from the underlying firmware or from the
     KVM module itself. Let's call them passthrough and KVM private
     events respectively. In order to distinguish these two types of
     events, bits[23:20] of the event number are reserved. More details
     can be found in the defined types (KVM_SDEI_EV_NUM_TYPE_{VIRT,
     PRIV}) in kvm_sdei.h.

   * "struct kvm_sdei_event" represents the SDEI event identified by
     the event number. All the events are linked to @kvm_sdei_events
     as a link list, which is protected by lock @kvm_sdei_lock. For
     this event, its backup event could be underly firmware exposed
     physical event (struct sdei_event), or the kvm private event
     (struct kvm_sdei_priv). For the former one, the event is needed
     to be registered/unregistered/enabled/disabled from the underly
     firmware at appropriate time. We needn't do same thing for the
     later one.

   * "struct kvm_sdei_kvm_event" represents the SDEI event that has
     been registered to particular VM. All the events are organized
     as a RB-tree, whose root is "struct kvm_sdei_event". It's indexed
     by @kvm->userspace_pid.

   * "struct kvm_sdei_vcpu_event" represents the event which have been
     pending for the target vCPU. These events forms a link list through
     @vcpu->arch.sdei_events, protected by lock @vcpu->arch.sdei_lock.

For now, the error code SDEI_NOT_SUPPORTED is returned for all SDEI
hypercalls; they will be implemented and supported one by one in the
subsequent patches.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/include/asm/kvm_host.h |   9 ++
 arch/arm64/include/asm/kvm_sdei.h | 111 +++++++++++++++++++++
 arch/arm64/kvm/Makefile           |   2 +-
 arch/arm64/kvm/arm.c              |   7 ++
 arch/arm64/kvm/hypercalls.c       |  19 ++++
 arch/arm64/kvm/sdei.c             | 156 ++++++++++++++++++++++++++++++
 drivers/firmware/arm_sdei.c       |  19 ++++
 include/linux/arm_sdei.h          |   4 +
 8 files changed, 326 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/include/asm/kvm_sdei.h
 create mode 100644 arch/arm64/kvm/sdei.c

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f81151ad3d3c..2a8cfb3895f7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -336,6 +336,15 @@ struct kvm_vcpu_arch {
 		u64 last_steal;
 		gpa_t base;
 	} steal;
+
+	spinlock_t			sdei_lock;
+	bool				sdei_masked;
+	int				sdei_cpu;
+	struct user_pt_regs		sdei_normal_regs;
+	struct user_pt_regs		sdei_critical_regs;
+	struct kvm_sdei_vcpu_event	*sdei_normal_event;
+	struct kvm_sdei_vcpu_event	*sdei_critical_event;
+	struct list_head		sdei_events;
 };
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
diff --git a/arch/arm64/include/asm/kvm_sdei.h b/arch/arm64/include/asm/kvm_sdei.h
new file mode 100644
index 000000000000..6cbf4015a371
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_sdei.h
@@ -0,0 +1,111 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright Gavin Shan, Redhat Inc 2020.
+ */
+
+#ifndef __ARM64_KVM_SDEI_H__
+#define __ARM64_KVM_SDEI_H__
+
+#include <linux/bitmap.h>
+#include <linux/arm_sdei.h>
+#include <asm/kvm_host.h>
+
+struct kvm_sdei_info {
+	bool		supported;
+	bool		use_hvc;
+	unsigned long	version;
+};
+
+typedef void (*kvm_sdei_notify_func_t)(struct kvm_vcpu *vcpu,
+				       unsigned long num,
+				       unsigned int state);
+
+enum {
+	KVM_SDEI_STATE_DELIVERED,
+	KVM_SDEI_STATE_COMPLETED,
+};
+
+struct kvm_sdei_priv {
+	unsigned long		num;
+	unsigned long		type;
+	unsigned long		signaled;
+	unsigned long		priority;
+	unsigned long		route_mode;
+	unsigned long		route_affinity;
+	kvm_sdei_notify_func_t	notifier;
+};
+
+struct kvm_sdei_event;
+struct kvm_sdei_kvm_event {
+	struct kvm_sdei_event	*event;
+	struct kvm		*kvm;
+	struct rb_node		node;
+	unsigned long		users;
+
+	unsigned long		route_mode;
+	unsigned long		route_affinity;
+	unsigned long		entries[KVM_MAX_VCPUS];
+	unsigned long		params[KVM_MAX_VCPUS];
+	unsigned long		registered[BITS_TO_LONGS(KVM_MAX_VCPUS)];
+	unsigned long		enabled[BITS_TO_LONGS(KVM_MAX_VCPUS)];
+};
+
+struct kvm_sdei_vcpu_event {
+	struct kvm_sdei_kvm_event	*event;
+	unsigned long			users;
+	struct list_head		link;
+};
+
+struct kvm_sdei_event {
+	struct sdei_event	*event;
+	struct kvm_sdei_priv	*priv;
+
+	spinlock_t		lock;
+	struct rb_root		root;
+	unsigned long		count;
+
+	unsigned long		num;
+	struct list_head	link;
+};
+
+/*
+ * According to SDEI specification (v1.0), the event number spans 32-bits
+ * and the lower 24-bits are used as the (real) event number. I don't think
+ * we can use that much SDEI numbers in one system. So we reserve 4-bits
+ * from the 24-bits real event number, to indicate its type here.
+ */
+#define KVM_SDEI_EV_NUM_TYPE_SHIFT     20
+#define KVM_SDEI_EV_NUM_TYPE_MASK      0xF
+#define KVM_SDEI_EV_NUM_TYPE_PHYS      0
+#define KVM_SDEI_EV_NUM_TYPE_VIRT      1
+#define KVM_SDEI_EV_NUM_TYPE_PRIV      2
+
+#define KVM_SDEI_DECLARE_FUNC(name, value)			\
+static inline bool kvm_sdei_num_is_##name(unsigned long num)	\
+{								\
+	return (((num >> KVM_SDEI_EV_NUM_TYPE_SHIFT) &		\
+		KVM_SDEI_EV_NUM_TYPE_MASK) ==			\
+		KVM_SDEI_EV_NUM_TYPE_##value);			\
+}
+
+KVM_SDEI_DECLARE_FUNC(phys, PHYS)
+KVM_SDEI_DECLARE_FUNC(virt, VIRT)
+KVM_SDEI_DECLARE_FUNC(priv, PRIV)
+
+static inline unsigned long kvm_sdei_num_to_std(unsigned long num)
+{
+	return (num & ~(KVM_SDEI_EV_NUM_TYPE_MASK <<
+		KVM_SDEI_EV_NUM_TYPE_SHIFT));
+}
+
+static inline bool kvm_sdei_num_is_valid(unsigned long num)
+{
+	return kvm_sdei_num_is_virt(num) || kvm_sdei_num_is_priv(num);
+}
+
+int kvm_sdei_hypercall(struct kvm_vcpu *vcpu);
+void kvm_sdei_init(void);
+void kvm_sdei_create_vcpu(struct kvm_vcpu *vcpu);
+void kvm_sdei_destroy_vm(struct kvm *kvm);
+
+#endif /* __ARM64_KVM_SDEI_H__ */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 8d3d9513cbfe..5ebd8abd81c8 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -16,7 +16,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
 	 inject_fault.o regmap.o va_layout.o hyp.o hyp-init.o handle_exit.o \
 	 guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o \
 	 vgic-sys-reg-v3.o fpsimd.o pmu.o \
-	 aarch32.o arch_timer.o \
+	 aarch32.o arch_timer.o sdei.o \
 	 vgic/vgic.o vgic/vgic-init.o \
 	 vgic/vgic-irqfd.o vgic/vgic-v2.o \
 	 vgic/vgic-v3.o vgic/vgic-v4.o \
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 73e12869afe3..bb539b51cd57 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -36,6 +36,7 @@
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_coproc.h>
+#include <asm/kvm_sdei.h>
 #include <asm/sections.h>
 
 #include <kvm/arm_hypercalls.h>
@@ -158,6 +159,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 {
 	int i;
 
+	kvm_sdei_destroy_vm(kvm);
+
 	kvm_vgic_destroy(kvm);
 
 	free_percpu(kvm->arch.last_vcpu_ran);
@@ -285,6 +288,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	if (err)
 		return err;
 
+	kvm_sdei_create_vcpu(vcpu);
+
 	return create_hyp_mappings(vcpu, vcpu + 1, PAGE_HYP);
 }
 
@@ -1681,6 +1686,8 @@ int kvm_arch_init(void *opaque)
 	if (err)
 		goto out_hyp;
 
+	kvm_sdei_init();
+
 	if (in_hyp_mode)
 		kvm_info("VHE mode initialized successfully\n");
 	else
diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
index 550dfa3e53cd..1268465efa64 100644
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -5,6 +5,7 @@
 #include <linux/kvm_host.h>
 
 #include <asm/kvm_emulate.h>
+#include <asm/kvm_sdei.h>
 
 #include <kvm/arm_hypercalls.h>
 #include <kvm/arm_psci.h>
@@ -62,6 +63,24 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 		if (gpa != GPA_INVALID)
 			val = gpa;
 		break;
+	case SDEI_1_0_FN_SDEI_VERSION:
+	case SDEI_1_0_FN_SDEI_EVENT_REGISTER:
+	case SDEI_1_0_FN_SDEI_EVENT_ENABLE:
+	case SDEI_1_0_FN_SDEI_EVENT_DISABLE:
+	case SDEI_1_0_FN_SDEI_EVENT_CONTEXT:
+	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE:
+	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME:
+	case SDEI_1_0_FN_SDEI_EVENT_UNREGISTER:
+	case SDEI_1_0_FN_SDEI_EVENT_STATUS:
+	case SDEI_1_0_FN_SDEI_EVENT_GET_INFO:
+	case SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET:
+	case SDEI_1_0_FN_SDEI_PE_MASK:
+	case SDEI_1_0_FN_SDEI_PE_UNMASK:
+	case SDEI_1_0_FN_SDEI_INTERRUPT_BIND:
+	case SDEI_1_0_FN_SDEI_INTERRUPT_RELEASE:
+	case SDEI_1_0_FN_SDEI_PRIVATE_RESET:
+	case SDEI_1_0_FN_SDEI_SHARED_RESET:
+		return kvm_sdei_hypercall(vcpu);
 	default:
 		return kvm_psci_call(vcpu);
 	}
diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
new file mode 100644
index 000000000000..e2090e9bab8b
--- /dev/null
+++ b/arch/arm64/kvm/sdei.c
@@ -0,0 +1,156 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright Gavin Shan, Redhat Inc 2020.
+ */
+
+#include <linux/kernel.h>
+#include <linux/rbtree.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <kvm/arm_hypercalls.h>
+#include <asm/kvm_sdei.h>
+
+static struct kvm_sdei_info *kvm_sdei_data;
+static DEFINE_SPINLOCK(kvm_sdei_lock);
+static LIST_HEAD(kvm_sdei_events);
+
+#ifdef CONFIG_ARM_SDE_INTERFACE
+static struct kvm_sdei_info *kvm_sdei_get_kvm_info(void)
+{
+	return sdei_get_kvm_info();
+}
+
+static int kvm_sdei_unregister_event(struct sdei_event *event)
+{
+	return sdei_event_unregister(event);
+}
+#else
+static inline struct kvm_sdei_info *kvm_sdei_get_kvm_info(void)
+{
+	return NULL;
+}
+
+static inline int kvm_sdei_unregister_event(struct sdei_event *event)
+{
+	return -EPERM;
+}
+#endif /* CONFIG_ARM_SDE_INTERFACE */
+
+static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
+{
+	struct kvm_sdei_event *e, *event = NULL;
+	struct kvm_sdei_kvm_event *tmp, *kevent = NULL;
+	unsigned long event_type, event_num;
+	unsigned long ret = SDEI_SUCCESS;
+
+	spin_lock(&kvm_sdei_lock);
+
+	list_for_each_entry_safe(event, e, &kvm_sdei_events, link) {
+		spin_lock(&event->lock);
+
+		/* Check if the event type is the requested one */
+		event_type = event->priv ? event->priv->type :
+					   event->event->type;
+		event_num = event->priv ? event->priv->num :
+					  event->event->event_num;
+		if (!(types & (1 << SDEI_EVENT_TYPE_PRIVATE)) &&
+		     (event_type == SDEI_EVENT_TYPE_PRIVATE)) {
+			spin_unlock(&event->lock);
+			continue;
+		}
+
+		if (!(types & (1 << SDEI_EVENT_TYPE_SHARED)) &&
+		     (event_type == SDEI_EVENT_TYPE_SHARED)) {
+			spin_unlock(&event->lock);
+			continue;
+		}
+
+		/* Remove all unused kvm events */
+		rbtree_postorder_for_each_entry_safe(kevent, tmp,
+						     &event->root, node) {
+			if (kevent->users)
+				continue;
+
+			if (kvm && kevent->kvm != kvm)
+				continue;
+
+			rb_erase(&kevent->node, &event->root);
+			kfree(kevent);
+			event->count--;
+		}
+
+		/*
+		 * Destroy the event if necessary. The passthrou event
+		 * will be unregistered if it's going to be destroyed.
+		 */
+		if (event->count) {
+			spin_unlock(&event->lock);
+			continue;
+		}
+
+		spin_unlock(&event->lock);
+		list_del(&event->link);
+		if (event->event)
+			kvm_sdei_unregister_event(event->event);
+		kfree(event);
+	}
+
+	spin_unlock(&kvm_sdei_lock);
+
+	return ret;
+}
+
+int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
+{
+	u32 function = smccc_get_function(vcpu);
+	unsigned long ret;
+
+	switch (function) {
+	case SDEI_1_0_FN_SDEI_VERSION:
+	case SDEI_1_0_FN_SDEI_EVENT_REGISTER:
+	case SDEI_1_0_FN_SDEI_EVENT_ENABLE:
+	case SDEI_1_0_FN_SDEI_EVENT_DISABLE:
+	case SDEI_1_0_FN_SDEI_EVENT_CONTEXT:
+	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE:
+	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME:
+	case SDEI_1_0_FN_SDEI_EVENT_UNREGISTER:
+	case SDEI_1_0_FN_SDEI_EVENT_STATUS:
+	case SDEI_1_0_FN_SDEI_EVENT_GET_INFO:
+	case SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET:
+	case SDEI_1_0_FN_SDEI_PE_MASK:
+	case SDEI_1_0_FN_SDEI_PE_UNMASK:
+	case SDEI_1_0_FN_SDEI_INTERRUPT_BIND:
+	case SDEI_1_0_FN_SDEI_INTERRUPT_RELEASE:
+	case SDEI_1_0_FN_SDEI_PRIVATE_RESET:
+	case SDEI_1_0_FN_SDEI_SHARED_RESET:
+	default:
+		ret = SDEI_NOT_SUPPORTED;
+	}
+
+	smccc_set_retval(vcpu, ret, 0, 0, 0);
+
+	return 1;
+}
+
+void kvm_sdei_init(void)
+{
+	kvm_sdei_data = kvm_sdei_get_kvm_info();
+}
+
+void kvm_sdei_create_vcpu(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.sdei_masked = true;
+	vcpu->arch.sdei_cpu = INT_MAX;
+	vcpu->arch.sdei_normal_event = NULL;
+	vcpu->arch.sdei_critical_event = NULL;
+	spin_lock_init(&vcpu->arch.sdei_lock);
+	INIT_LIST_HEAD(&vcpu->arch.sdei_events);
+}
+
+void kvm_sdei_destroy_vm(struct kvm *kvm)
+{
+	unsigned int types = ((1 << SDEI_EVENT_TYPE_PRIVATE) |
+			      (1 << SDEI_EVENT_TYPE_SHARED));
+
+	kvm_sdei_reset(kvm, types);
+}
diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
index 6bc84ab317d3..e6fc390615ba 100644
--- a/drivers/firmware/arm_sdei.c
+++ b/drivers/firmware/arm_sdei.c
@@ -32,6 +32,9 @@
 #include <linux/smp.h>
 #include <linux/spinlock.h>
 #include <linux/uaccess.h>
+#ifdef CONFIG_KVM
+#include <asm/kvm_sdei.h>
+#endif
 
 /*
  * The call to use to reach the firmware.
@@ -68,6 +71,9 @@ static DEFINE_MUTEX(sdei_events_lock);
 /* and then hold this when modifying the list */
 static DEFINE_SPINLOCK(sdei_list_lock);
 static LIST_HEAD(sdei_list);
+#ifdef CONFIG_KVM
+static struct kvm_sdei_info kvm_sdei_data;
+#endif
 
 /* Private events are registered/enabled via IPI passing one of these */
 struct sdei_crosscall_args {
@@ -1042,6 +1048,12 @@ static int sdei_probe(struct platform_device *pdev)
 		goto remove_reboot;
 	}
 
+#ifdef CONFIG_KVM
+	kvm_sdei_data.supported = true;
+	kvm_sdei_data.use_hvc = (conduit == SMCCC_CONDUIT_HVC);
+	kvm_sdei_data.version = ver;
+#endif
+
 	return 0;
 
 remove_reboot:
@@ -1119,6 +1131,13 @@ static int __init sdei_init(void)
  */
 subsys_initcall_sync(sdei_init);
 
+#ifdef CONFIG_KVM
+struct kvm_sdei_info *sdei_get_kvm_info(void)
+{
+	return &kvm_sdei_data;
+}
+#endif
+
 int sdei_event_handler(struct pt_regs *regs,
 		       struct sdei_registered_event *arg)
 {
diff --git a/include/linux/arm_sdei.h b/include/linux/arm_sdei.h
index 28d5d5853314..055b298b1f37 100644
--- a/include/linux/arm_sdei.h
+++ b/include/linux/arm_sdei.h
@@ -83,6 +83,10 @@ struct sdei_registered_event {
 	u8			 priority;
 };
 
+#ifdef CONFIG_KVM
+struct kvm_sdei_info *sdei_get_kvm_info(void);
+#endif
+
 /* The arch code entry point should then call this when an event arrives. */
 int notrace sdei_event_handler(struct pt_regs *regs,
 			       struct sdei_registered_event *arg);
-- 
2.23.0


* [PATCH 05/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_VERSION hypercall
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_VERSION hypercall by returning the
corresponding version. The SDEI version retrieved from the underlying
firmware is returned if that's available. Otherwise, v1.0 is returned
to support the KVM private events, which originate from the KVM module
itself.
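
For illustration only (no guest-side code is part of this patch), a
guest could query the version with a plain SMCCC 1.1 call and decode
the major number using the existing definitions from
include/uapi/linux/arm_sdei.h:

   struct arm_smccc_res res;
   unsigned long major;

   arm_smccc_1_1_hvc(SDEI_1_0_FN_SDEI_VERSION, &res);
   if ((long)res.a0 >= 0) {
           /* The major version lives in bits [63:48] of the result */
           major = (res.a0 >> SDEI_VERSION_MAJOR_SHIFT) &
                   SDEI_VERSION_MAJOR_MASK;
   }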

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index e2090e9bab8b..f5739c0063df 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -36,6 +36,21 @@ static inline int kvm_sdei_unregister_event(struct sdei_event *event)
 }
 #endif /* CONFIG_ARM_SDE_INTERFACE */
 
+static unsigned long kvm_sdei_hypercall_version(struct kvm_vcpu *vcpu)
+{
+	unsigned long ret = SDEI_NOT_SUPPORTED;
+
+	if (kvm_sdei_data && kvm_sdei_data->supported) {
+		ret = kvm_sdei_data->version;
+		goto out;
+	}
+
+	ret = (1UL << SDEI_VERSION_MAJOR_SHIFT);
+
+out:
+	return ret;
+}
+
 static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
 {
 	struct kvm_sdei_event *e, *event = NULL;
@@ -107,6 +122,8 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 
 	switch (function) {
 	case SDEI_1_0_FN_SDEI_VERSION:
+		ret = kvm_sdei_hypercall_version(vcpu);
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_REGISTER:
 	case SDEI_1_0_FN_SDEI_EVENT_ENABLE:
 	case SDEI_1_0_FN_SDEI_EVENT_DISABLE:
-- 
2.23.0


* [PATCH 06/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_REGISTER
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_EVENT_REGISTER hypercall by adding
kvm_sdei_hypercall_register(), which works as below (a guest-side sketch
of the call follows the list):

   * If both the event and the KVM event exist and the registered status
     is off, the registered status is turned on in system or vCPU scope,
     depending on the event type (shared or private). Otherwise, an
     error code is returned.

   * If the event doesn't exist, it is created and put into the linked
     list (@kvm_sdei_events). The event is also registered with the
     underlying firmware if a valid one is present.

   * If the KVM event doesn't exist, it is created and put into the
     RB-tree of the parent event. The registered status is updated as
     well.
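
For orientation only (a real guest goes through the SDEI client driver
rather than issuing raw SMCCC calls), the guest-side call served by this
handler carries the event number, entry point, entry argument, routing
mode and affinity; event_num, handler_entry and param are placeholders:

   struct arm_smccc_res res;

   arm_smccc_1_1_hvc(SDEI_1_0_FN_SDEI_EVENT_REGISTER,
                     event_num,                    /* arg1: event number   */
                     (unsigned long)handler_entry, /* arg2: entry point    */
                     param,                        /* arg3: entry argument */
                     SDEI_EVENT_REGISTER_RM_ANY,   /* arg4: routing mode   */
                     0,                            /* arg5: affinity       */
                     &res);
   if ((long)res.a0 != SDEI_SUCCESS)
           /* e.g. SDEI_INVALID_PARAMETERS, SDEI_DENIED, SDEI_OUT_OF_RESOURCE */
           pr_warn("SDEI event registration failed: %ld\n", (long)res.a0);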

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 230 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 230 insertions(+)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index f5739c0063df..740694d7f0ff 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -13,6 +13,16 @@
 static struct kvm_sdei_info *kvm_sdei_data;
 static DEFINE_SPINLOCK(kvm_sdei_lock);
 static LIST_HEAD(kvm_sdei_events);
+static struct kvm_sdei_priv kvm_sdei_privs[] = {
+	{ 0x40200000,
+	  SDEI_EVENT_TYPE_PRIVATE,
+	  1,
+	  SDEI_EVENT_PRIORITY_CRITICAL,
+	  SDEI_EVENT_REGISTER_RM_ANY,
+	  0,
+	  NULL
+	},
+};
 
 #ifdef CONFIG_ARM_SDE_INTERFACE
 static struct kvm_sdei_info *kvm_sdei_get_kvm_info(void)
@@ -20,6 +30,14 @@ static struct kvm_sdei_info *kvm_sdei_get_kvm_info(void)
 	return sdei_get_kvm_info();
 }
 
+static struct sdei_event *kvm_sdei_register_event(unsigned long event_num,
+						  sdei_event_callback *cb,
+						  void *arg)
+{
+	return sdei_event_register(kvm_sdei_num_to_std(event_num),
+				   cb, arg);
+}
+
 static int kvm_sdei_unregister_event(struct sdei_event *event)
 {
 	return sdei_event_unregister(event);
@@ -30,12 +48,78 @@ static inline struct kvm_sdei_info *kvm_sdei_get_kvm_info(void)
 	return NULL;
 }
 
+static inline struct sdei_event *kvm_sdei_register_event(
+					unsigned long event_num,
+					sdei_event_callback *cb,
+					void *arg)
+{
+	return NULL;
+}
+
 static inline int kvm_sdei_unregister_event(struct sdei_event *event)
 {
 	return -EPERM;
 }
 #endif /* CONFIG_ARM_SDE_INTERFACE */
 
+static struct kvm_sdei_priv *kvm_sdei_find_priv(unsigned long num)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(kvm_sdei_privs); i++) {
+		if (kvm_sdei_privs[i].num == num)
+			return &kvm_sdei_privs[i];
+	}
+
+	return NULL;
+}
+
+static struct kvm_sdei_event *kvm_sdei_find_event(struct kvm *kvm,
+		unsigned long num, struct kvm_sdei_kvm_event **kvm_event,
+		struct rb_node **rb_parent, struct rb_node ***rb_link)
+{
+	struct kvm_sdei_event *event = NULL;
+	struct kvm_sdei_kvm_event *tmp, *kevent = NULL;
+	struct rb_node *p, *parent = NULL;
+
+	list_for_each_entry(event, &kvm_sdei_events, link) {
+		if (event->num == num)
+			break;
+	}
+
+	if (!event || event->num != num) {
+		event = NULL;
+		goto out;
+	}
+
+	if (!kvm || !kvm_event)
+		goto out;
+
+	spin_lock(&event->lock);
+	p = event->root.rb_node;
+	while (p) {
+		parent = p;
+		tmp = rb_entry(parent, struct kvm_sdei_kvm_event, node);
+		if (tmp->kvm->userspace_pid > kvm->userspace_pid) {
+			p = parent->rb_left;
+		} else if (tmp->kvm->userspace_pid < kvm->userspace_pid) {
+			p = parent->rb_right;
+		} else {
+			kevent = tmp;
+			break;
+		}
+	}
+
+	spin_unlock(&event->lock);
+	*kvm_event = kevent;
+	if (rb_parent)
+		*rb_parent = parent;
+	if (rb_link)
+		*rb_link = &p;
+out:
+	return event;
+}
+
 static unsigned long kvm_sdei_hypercall_version(struct kvm_vcpu *vcpu)
 {
 	unsigned long ret = SDEI_NOT_SUPPORTED;
@@ -51,6 +135,150 @@ static unsigned long kvm_sdei_hypercall_version(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+static int kvm_sdei_handler(u32 num, struct pt_regs *regs, void *arg)
+{
+	return 0;
+}
+
+static unsigned long kvm_sdei_hypercall_register(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_sdei_event *event = NULL, *new = NULL;
+	struct kvm_sdei_kvm_event *kevent = NULL;
+	struct kvm_sdei_priv *priv = NULL;
+	struct rb_node *rb_parent, **rb_link;
+	unsigned long event_num = smccc_get_arg1(vcpu);
+	unsigned long event_entry = smccc_get_arg2(vcpu);
+	unsigned long event_param = smccc_get_arg3(vcpu);
+	unsigned long route_mode = smccc_get_arg4(vcpu);
+	unsigned long route_affinity = smccc_get_arg5(vcpu);
+	unsigned long event_type;
+	int index = vcpu->vcpu_idx;
+	unsigned long ret = SDEI_SUCCESS;
+
+	/* Sanity check */
+	if (!kvm_sdei_num_is_valid(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	if (!(kvm_sdei_data && kvm_sdei_data->supported) &&
+	    kvm_sdei_num_is_virt(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	if (!(route_mode == SDEI_EVENT_REGISTER_RM_ANY ||
+	      route_mode == SDEI_EVENT_REGISTER_RM_PE)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	/*
+	 * Find the event. We just need to set the registered
+	 * bit if it already exists.
+	 */
+	spin_lock(&kvm_sdei_lock);
+
+	event = kvm_sdei_find_event(kvm, event_num, &kevent,
+				    &rb_parent, &rb_link);
+	if (kevent) {
+		event_type = event->priv ? event->priv->type :
+					   event->event->type;
+		index = (event_type == SDEI_EVENT_TYPE_PRIVATE) ?
+			vcpu->vcpu_idx : 0;
+
+		spin_lock(&event->lock);
+		if (test_bit(index, kevent->registered)) {
+			spin_unlock(&event->lock);
+			ret = SDEI_DENIED;
+			goto unlock;
+		}
+
+		kevent->route_mode = route_mode;
+		kevent->route_affinity = route_affinity;
+		kevent->entries[index] = event_entry;
+		kevent->params[index] = event_param;
+		set_bit(index, kevent->registered);
+		spin_unlock(&event->lock);
+
+		ret = SDEI_SUCCESS;
+		goto unlock;
+	}
+
+	/*
+	 * Create the event. The event is going to be registered
+	 * if it's a passthrou event.
+	 */
+	if (!event) {
+		if (kvm_sdei_num_is_priv(event_num)) {
+			priv = kvm_sdei_find_priv(event_num);
+			if (!priv) {
+				ret = SDEI_INVALID_PARAMETERS;
+				goto unlock;
+			}
+		}
+
+		event = kzalloc(sizeof(*event), GFP_KERNEL);
+		if (!event) {
+			ret = SDEI_OUT_OF_RESOURCE;
+			goto unlock;
+		}
+
+		if (!priv) {
+			event->event = kvm_sdei_register_event(event_num,
+						kvm_sdei_handler, event);
+			if (!event->event) {
+				kfree(event);
+				ret = SDEI_OUT_OF_RESOURCE;
+				goto unlock;
+			}
+		}
+
+		new = event;
+		spin_lock_init(&event->lock);
+		event->priv = priv;
+		event->root = RB_ROOT;
+		event->count = 0;
+		event->num = event_num;
+		list_add_tail(&event->link, &kvm_sdei_events);
+
+		new = event;
+		rb_parent = NULL;
+		rb_link = &event->root.rb_node;
+	}
+
+	/* Create KVM event */
+	kevent = kzalloc(sizeof(*kevent), GFP_KERNEL);
+	if (!kevent) {
+		kfree(new);
+		ret = SDEI_OUT_OF_RESOURCE;
+		goto unlock;
+	}
+
+	event_type = priv ? priv->type : event->event->type;
+	index = (event_type == SDEI_EVENT_TYPE_PRIVATE) ? vcpu->vcpu_idx : 0;
+	kevent->event = event;
+	kevent->kvm = kvm;
+	kevent->users = 0;
+	kevent->route_mode = route_mode;
+	kevent->route_affinity = route_affinity;
+	kevent->entries[index] = event_entry;
+	kevent->params[index] = event_param;
+	set_bit(index, kevent->registered);
+
+	spin_lock(&event->lock);
+	rb_link_node(&kevent->node, rb_parent, rb_link);
+	rb_insert_color(&kevent->node, &event->root);
+	event->count++;
+	spin_unlock(&event->lock);
+
+unlock:
+	spin_unlock(&kvm_sdei_lock);
+out:
+	return ret;
+}
+
 static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
 {
 	struct kvm_sdei_event *e, *event = NULL;
@@ -125,6 +353,8 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		ret = kvm_sdei_hypercall_version(vcpu);
 		break;
 	case SDEI_1_0_FN_SDEI_EVENT_REGISTER:
+		ret = kvm_sdei_hypercall_register(vcpu);
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_ENABLE:
 	case SDEI_1_0_FN_SDEI_EVENT_DISABLE:
 	case SDEI_1_0_FN_SDEI_EVENT_CONTEXT:
-- 
2.23.0


* [PATCH 07/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_{ENABLE, DISABLE} hypercall
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_EVENT_{ENABLE, DISABLE} hypercalls by
implementing kvm_sdei_hypercall_enable(). On success, the event is
enabled globally or on the local CPU. Otherwise, an error code is
returned. A passthrough event is not enabled or disabled in the
underlying firmware; we only need to update the enabled bits, which
serve as filters during event delivery to the target VMs and vCPUs.
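
Purely as an illustration (a real guest would use the SDEI client
driver), both hypercalls handled here take only the event number, with
event_num standing in for a previously registered event:

   struct arm_smccc_res res;

   arm_smccc_1_1_hvc(SDEI_1_0_FN_SDEI_EVENT_ENABLE, event_num, &res);

   /* Later, to stop delivery without unregistering the event */
   arm_smccc_1_1_hvc(SDEI_1_0_FN_SDEI_EVENT_DISABLE, event_num, &res);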

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 68 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 740694d7f0ff..320b79528211 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -279,6 +279,70 @@ static unsigned long kvm_sdei_hypercall_register(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+static unsigned long kvm_sdei_hypercall_enable(struct kvm_vcpu *vcpu,
+					       bool enabled)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_sdei_event *event = NULL;
+	struct kvm_sdei_kvm_event *kevent = NULL;
+	unsigned long event_num = smccc_get_arg1(vcpu);
+	unsigned long event_type;
+	int index = 0;
+	unsigned long ret = SDEI_SUCCESS;
+
+	/* Validate event number */
+	if (!kvm_sdei_num_is_valid(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	if (!(kvm_sdei_data && kvm_sdei_data->supported) &&
+	    kvm_sdei_num_is_virt(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	/* Find the event */
+	spin_lock(&kvm_sdei_lock);
+	event = kvm_sdei_find_event(kvm, event_num, &kevent, NULL, NULL);
+	if (!kevent) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto unlock;
+	}
+
+	/* Sanity check */
+	spin_lock(&event->lock);
+	event_type = event->priv ? event->priv->type : event->event->type;
+	index = (event_type == SDEI_EVENT_TYPE_PRIVATE) ? vcpu->vcpu_idx : 0;
+	if (kevent->users) {
+		ret = SDEI_PENDING;
+		goto unlock_event;
+	}
+
+	if (!test_bit(index, kevent->registered)) {
+		ret = SDEI_DENIED;
+		goto unlock_event;
+	}
+
+	if (enabled == test_bit(index, kevent->enabled)) {
+		ret = SDEI_DENIED;
+		goto unlock_event;
+	}
+
+	/* Update status */
+	if (enabled)
+		set_bit(index, kevent->enabled);
+	else
+		clear_bit(index, kevent->enabled);
+
+unlock_event:
+	spin_unlock(&event->lock);
+unlock:
+	spin_unlock(&kvm_sdei_lock);
+out:
+	return ret;
+}
+
 static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
 {
 	struct kvm_sdei_event *e, *event = NULL;
@@ -356,7 +420,11 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		ret = kvm_sdei_hypercall_register(vcpu);
 		break;
 	case SDEI_1_0_FN_SDEI_EVENT_ENABLE:
+		ret = kvm_sdei_hypercall_enable(vcpu, true);
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_DISABLE:
+		ret = kvm_sdei_hypercall_enable(vcpu, false);
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_CONTEXT:
 	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE:
 	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME:
-- 
2.23.0


* [PATCH 08/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_UNREGISTER hypercall
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_EVENT_UNREGISTER hypercall by adding
kvm_sdei_hypercall_unregister(). For an event owned by KVM itself, the
registered status is updated accordingly and the event is released if
it's no longer needed. A passthrough event is additionally unregistered
from the underlying firmware in that case.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 73 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 320b79528211..63d621dc9711 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -343,6 +343,77 @@ static unsigned long kvm_sdei_hypercall_enable(struct kvm_vcpu *vcpu,
 	return ret;
 }
 
+static unsigned long kvm_sdei_hypercall_unregister(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_sdei_event *event = NULL;
+	struct kvm_sdei_kvm_event *kevent = NULL;
+	unsigned long event_num = smccc_get_arg1(vcpu);
+	unsigned long event_type;
+	int index = 0;
+	unsigned long ret = SDEI_SUCCESS;
+
+	/* Validate event number */
+	if (!kvm_sdei_num_is_valid(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	if (!(kvm_sdei_data && kvm_sdei_data->supported) &&
+	    kvm_sdei_num_is_virt(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	/* Find the event */
+	spin_lock(&kvm_sdei_lock);
+	event = kvm_sdei_find_event(kvm, event_num, &kevent, NULL, NULL);
+	if (!kevent) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto unlock;
+	}
+
+	/* Sanity check */
+	spin_lock(&event->lock);
+	if (kevent->users) {
+		ret = SDEI_PENDING;
+		goto unlock_event;
+	}
+
+	event_type = event->priv ? event->priv->type : event->event->type;
+	index = (event_type == SDEI_EVENT_TYPE_PRIVATE) ? vcpu->vcpu_idx : 0;
+	if (!test_bit(index, kevent->registered)) {
+		ret = SDEI_DENIED;
+		goto unlock_event;
+	}
+
+	/* Update status and destroy it if needed */
+	clear_bit(index, kevent->registered);
+	clear_bit(index, kevent->enabled);
+	if (!bitmap_empty(kevent->registered, KVM_MAX_VCPUS))
+		goto unlock_event;
+
+	rb_erase(&kevent->node, &event->root);
+	kfree(kevent);
+	event->count--;
+	if (event->count)
+		goto unlock_event;
+
+	spin_unlock(&event->lock);
+	list_del(&event->link);
+	if (event->event)
+		kvm_sdei_unregister_event(event->event);
+	kfree(event);
+	goto unlock;
+
+unlock_event:
+	spin_unlock(&event->lock);
+unlock:
+	spin_unlock(&kvm_sdei_lock);
+out:
+	return ret;
+}
+
 static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
 {
 	struct kvm_sdei_event *e, *event = NULL;
@@ -429,6 +500,8 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE:
 	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME:
 	case SDEI_1_0_FN_SDEI_EVENT_UNREGISTER:
+		ret = kvm_sdei_hypercall_unregister(vcpu);
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_STATUS:
 	case SDEI_1_0_FN_SDEI_EVENT_GET_INFO:
 	case SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET:
-- 
2.23.0


* [PATCH 09/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_STATUS hypercall
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_EVENT_STATUS hypercall by adding
kvm_sdei_hypercall_status(). On success, the event's current status
is returned. Otherwise, an error code is returned.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 50 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 63d621dc9711..2d2135a5c3ea 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -414,6 +414,54 @@ static unsigned long kvm_sdei_hypercall_unregister(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+static unsigned long kvm_sdei_hypercall_status(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_sdei_event *event = NULL;
+	struct kvm_sdei_kvm_event *kevent = NULL;
+	unsigned long event_num = smccc_get_arg1(vcpu);
+	unsigned long event_type;
+	int index = 0;
+	unsigned long ret = SDEI_SUCCESS;
+
+	/* Validate event number */
+	if (!kvm_sdei_num_is_valid(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	if (!(kvm_sdei_data && kvm_sdei_data->supported) &&
+	    kvm_sdei_num_is_virt(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	/* Find the event */
+	spin_lock(&kvm_sdei_lock);
+	event = kvm_sdei_find_event(kvm, event_num, &kevent, NULL, NULL);
+	if (!kevent) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto unlock;
+	}
+
+	spin_lock(&event->lock);
+
+	event_type = event->priv ? event->priv->type : event->event->type;
+	index = (event_type == SDEI_EVENT_TYPE_PRIVATE) ? vcpu->vcpu_idx : 0;
+	if (test_bit(index, kevent->registered))
+		ret |= (1UL << SDEI_EVENT_STATUS_REGISTERED);
+	if (test_bit(index, kevent->enabled))
+		ret |= (1UL << SDEI_EVENT_STATUS_ENABLED);
+	if (kevent->users)
+		ret |= (1UL << SDEI_EVENT_STATUS_RUNNING);
+
+	spin_unlock(&event->lock);
+unlock:
+	spin_unlock(&kvm_sdei_lock);
+out:
+	return ret;
+}
+
 static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
 {
 	struct kvm_sdei_event *e, *event = NULL;
@@ -503,6 +551,8 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		ret = kvm_sdei_hypercall_unregister(vcpu);
 		break;
 	case SDEI_1_0_FN_SDEI_EVENT_STATUS:
+		ret = kvm_sdei_hypercall_status(vcpu);
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_GET_INFO:
 	case SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET:
 	case SDEI_1_0_FN_SDEI_PE_MASK:
-- 
2.23.0


* [PATCH 10/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_GET_INFO hypercall
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_EVENT_GET_INFO hypercall by adding
kvm_sdei_hypercall_info(). On success, the requested information
about the event is returned. Otherwise, an error code is returned.

The requested information is retrieved from the SDEI event if it
has been created. Otherwise, it's retrieved from the underlying
firmware if applicable.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 125 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 125 insertions(+)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 2d2135a5c3ea..529505c4f0cf 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -38,6 +38,14 @@ static struct sdei_event *kvm_sdei_register_event(unsigned long event_num,
 				   cb, arg);
 }
 
+static int kvm_sdei_get_event_info(unsigned long event_num,
+				   unsigned int info,
+				   unsigned long *result)
+{
+	return sdei_event_get_info(kvm_sdei_num_to_std(event_num),
+				   info, (u64 *)result);
+}
+
 static int kvm_sdei_unregister_event(struct sdei_event *event)
 {
 	return sdei_event_unregister(event);
@@ -56,6 +64,14 @@ static inline struct sdei_event *kvm_sdei_register_event(
 	return NULL;
 }
 
+static inline int kvm_sdei_get_event_info(unsigned long event_num,
+					  unsigned int info,
+					  unsigned long *result)
+{
+	*result = SDEI_NOT_SUPPORTED;
+	return -EPERM;
+}
+
 static inline int kvm_sdei_unregister_event(struct sdei_event *event)
 {
 	return -EPERM;
@@ -462,6 +478,113 @@ static unsigned long kvm_sdei_hypercall_status(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+static unsigned long kvm_sdei_hypercall_info(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_sdei_event *event = NULL;
+	struct kvm_sdei_kvm_event *kevent = NULL;
+	struct kvm_sdei_priv *priv = NULL;
+	unsigned long event_num = smccc_get_arg1(vcpu);
+	unsigned long event_info = smccc_get_arg2(vcpu);
+	unsigned long event_type;
+	int index;
+	unsigned long ret = SDEI_SUCCESS;
+
+	/* Validate event number */
+	if (!kvm_sdei_num_is_valid(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	if (!(kvm_sdei_data && kvm_sdei_data->supported) &&
+	    kvm_sdei_num_is_virt(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	/*
+	 * The requested information could be retrieved from the
+	 * registered event, KVM private descriptor or underly
+	 * firmware.
+	 */
+	spin_lock(&kvm_sdei_lock);
+	event = kvm_sdei_find_event(kvm, event_num, &kevent, NULL, NULL);
+	if (kevent) {
+		spin_lock(&event->lock);
+
+		event_type = event->priv ? event->priv->type :
+					   event->event->type;
+		index = (event_type == SDEI_EVENT_TYPE_PRIVATE) ?
+			vcpu->vcpu_idx : 0;
+		if (!test_bit(index, kevent->registered)) {
+			ret = SDEI_INVALID_PARAMETERS;
+			goto unlock;
+		}
+
+		priv = event->priv;
+	} else if (kvm_sdei_num_is_priv(event_num)) {
+		priv = kvm_sdei_find_priv(event_num);
+		if (!priv) {
+			ret = SDEI_INVALID_PARAMETERS;
+			goto unlock;
+		}
+	} else if (kvm_sdei_num_is_virt(event_num)) {
+		if (event_info == SDEI_EVENT_INFO_EV_ROUTING_MODE ||
+		    event_info == SDEI_EVENT_INFO_EV_ROUTING_AFF) {
+			kvm_sdei_get_event_info(event_num,
+						SDEI_EVENT_INFO_EV_TYPE,
+						&event_type);
+			if (event_type != SDEI_EVENT_TYPE_SHARED) {
+				ret = SDEI_INVALID_PARAMETERS;
+				goto unlock;
+			}
+		}
+
+		kvm_sdei_get_event_info(event_num, event_info, &ret);
+		goto unlock;
+	}
+
+	switch (event_info) {
+	case SDEI_EVENT_INFO_EV_TYPE:
+		ret = priv ? priv->type : event->event->type;
+		break;
+	case SDEI_EVENT_INFO_EV_SIGNALED:
+		ret = priv ? priv->signaled : event->event->signaled;
+		break;
+	case SDEI_EVENT_INFO_EV_PRIORITY:
+		ret = priv ? priv->priority : event->event->priority;
+		break;
+	case SDEI_EVENT_INFO_EV_ROUTING_MODE:
+		event_type = priv ? priv->type : event->event->type;
+		if (event_type != SDEI_EVENT_TYPE_SHARED) {
+			ret = SDEI_INVALID_PARAMETERS;
+			break;
+		}
+
+		ret = kevent ? kevent->route_mode : priv->route_mode;
+		break;
+	case SDEI_EVENT_INFO_EV_ROUTING_AFF:
+		event_type = priv ? priv->type : event->event->type;
+		if (event_type != SDEI_EVENT_TYPE_SHARED) {
+			ret = SDEI_INVALID_PARAMETERS;
+			break;
+		}
+
+		ret = kevent ? kevent->route_affinity : priv->route_affinity;
+		break;
+	default:
+		ret = SDEI_INVALID_PARAMETERS;
+	}
+
+unlock:
+	if (kevent)
+		spin_unlock(&event->lock);
+
+	spin_unlock(&kvm_sdei_lock);
+out:
+	return ret;
+}
+
 static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
 {
 	struct kvm_sdei_event *e, *event = NULL;
@@ -554,6 +677,8 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		ret = kvm_sdei_hypercall_status(vcpu);
 		break;
 	case SDEI_1_0_FN_SDEI_EVENT_GET_INFO:
+		ret = kvm_sdei_hypercall_info(vcpu);
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET:
 	case SDEI_1_0_FN_SDEI_PE_MASK:
 	case SDEI_1_0_FN_SDEI_PE_UNMASK:
-- 
2.23.0


* [PATCH 11/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET hypercall
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET hypercall by adding
kvm_sdei_hypercall_route(). On success, the specified routing mode and
affinity are applied to the KVM SDEI event; otherwise, an error code is
returned. The call is only permitted for a shared event that is currently
registered, but neither enabled nor being handled.
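
A guest-side sketch of the call, reusing the smccc() and is_error()
helpers from the selftest in PATCH[18]. The event number and affinity
below are made up purely for illustration:

   uint64_t event_num = 0x40100001; /* hypothetical shared passthrough event */
   uint64_t affinity = 0;           /* MPIDR affinity of the target vCPU     */
   int64_t ret;

   /* Re-route a registered (but disabled and idle) shared event to one PE */
   ret = smccc(SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET, event_num,
               SDEI_EVENT_REGISTER_RM_PE, affinity, 0, 0);
   GUEST_ASSERT(!is_error(ret));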

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 74 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 529505c4f0cf..1e7291acea0d 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -585,6 +585,78 @@ static unsigned long kvm_sdei_hypercall_info(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+static unsigned long kvm_sdei_hypercall_route(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_sdei_event *event = NULL;
+	struct kvm_sdei_kvm_event *kevent = NULL;
+	unsigned long event_num = smccc_get_arg1(vcpu);
+	unsigned long route_mode = smccc_get_arg2(vcpu);
+	unsigned long route_affinity = smccc_get_arg3(vcpu);
+	unsigned long event_type;
+	int index = 0;
+	unsigned long ret = SDEI_SUCCESS;
+
+	/* Validate the parameters */
+	if (!kvm_sdei_num_is_valid(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	if (!(kvm_sdei_data && kvm_sdei_data->supported) &&
+	    kvm_sdei_num_is_virt(event_num)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	if (!(route_mode == SDEI_EVENT_REGISTER_RM_ANY ||
+	      route_mode == SDEI_EVENT_REGISTER_RM_PE)) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	/* Find the event */
+	spin_lock(&kvm_sdei_lock);
+	event = kvm_sdei_find_event(kvm, event_num, &kevent, NULL, NULL);
+	if (!kevent) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto unlock;
+	}
+
+	/* Sanity check */
+	spin_lock(&event->lock);
+	event_type = event->priv ? event->priv->type : event->event->type;
+	index = (event_type == SDEI_EVENT_TYPE_PRIVATE) ? vcpu->vcpu_idx : 0;
+	if (event_type != SDEI_EVENT_TYPE_SHARED) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto unlock_event;
+	}
+
+	if (!test_bit(index, kevent->registered) ||
+	    test_bit(index, kevent->enabled)     ||
+	    kevent->users) {
+		ret = SDEI_DENIED;
+		goto unlock_event;
+	}
+
+	if (route_mode == kevent->route_mode &&
+	    route_affinity == kevent->route_affinity) {
+		ret = SDEI_DENIED;
+		goto unlock_event;
+	}
+
+	/* Update status */
+	kevent->route_mode = route_mode;
+	kevent->route_affinity = route_affinity;
+
+unlock_event:
+	spin_unlock(&event->lock);
+unlock:
+	spin_unlock(&kvm_sdei_lock);
+out:
+	return ret;
+}
+
 static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
 {
 	struct kvm_sdei_event *e, *event = NULL;
@@ -680,6 +752,8 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		ret = kvm_sdei_hypercall_info(vcpu);
 		break;
 	case SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET:
+		ret = kvm_sdei_hypercall_route(vcpu);
+		break;
 	case SDEI_1_0_FN_SDEI_PE_MASK:
 	case SDEI_1_0_FN_SDEI_PE_UNMASK:
 	case SDEI_1_0_FN_SDEI_INTERRUPT_BIND:
-- 
2.23.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 12/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_PE_{MASK, UNMASK} hypercall
  2020-08-17 10:05 [PATCH 00/18] Support SDEI Virtualization Gavin Shan
                   ` (10 preceding siblings ...)
  2020-08-17 10:05 ` [PATCH 11/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET hypercall Gavin Shan
@ 2020-08-17 10:05 ` Gavin Shan
  2020-08-17 10:05 ` [PATCH 13/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_{PRIVATE, SHARED}_RESET hypercall Gavin Shan
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_PE_{MASK, UNMASK} hypercalls by adding
kvm_sdei_hypercall_mask(). The per-vCPU mask status is updated so that
events targeting a masked vCPU are dropped. However, the status is never
synchronized to the underlying firmware for passthrough events, because
those events might be shared by multiple VMs.
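
A guest-side sketch (again using the selftest's smccc() helper); both
calls take no arguments and only affect the calling vCPU:

   int64_t ret;

   /* Mask the vCPU: events targeting it are dropped from now on */
   ret = smccc(SDEI_1_0_FN_SDEI_PE_MASK, 0, 0, 0, 0, 0);
   GUEST_ASSERT(!is_error(ret));

   /* Unmask it again so that events can be delivered */
   ret = smccc(SDEI_1_0_FN_SDEI_PE_UNMASK, 0, 0, 0, 0, 0);
   GUEST_ASSERT(!is_error(ret));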

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 1e7291acea0d..0816136e73a6 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -657,6 +657,26 @@ static unsigned long kvm_sdei_hypercall_route(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+static unsigned long kvm_sdei_hypercall_mask(struct kvm_vcpu *vcpu,
+					     bool is_mask)
+{
+	unsigned long ret = SDEI_SUCCESS;
+
+	/* Sanity check */
+	spin_lock(&vcpu->arch.sdei_lock);
+	if (is_mask == vcpu->arch.sdei_masked) {
+		ret = SDEI_DENIED;
+		goto unlock;
+	}
+
+	/* Update the status */
+	vcpu->arch.sdei_masked = is_mask ? true : false;
+
+unlock:
+	spin_unlock(&vcpu->arch.sdei_lock);
+	return ret;
+}
+
 static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
 {
 	struct kvm_sdei_event *e, *event = NULL;
@@ -755,7 +775,11 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		ret = kvm_sdei_hypercall_route(vcpu);
 		break;
 	case SDEI_1_0_FN_SDEI_PE_MASK:
+		ret = kvm_sdei_hypercall_mask(vcpu, true);
+		break;
 	case SDEI_1_0_FN_SDEI_PE_UNMASK:
+		ret = kvm_sdei_hypercall_mask(vcpu, false);
+		break;
 	case SDEI_1_0_FN_SDEI_INTERRUPT_BIND:
 	case SDEI_1_0_FN_SDEI_INTERRUPT_RELEASE:
 	case SDEI_1_0_FN_SDEI_PRIVATE_RESET:
-- 
2.23.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 13/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_{PRIVATE, SHARED}_RESET hypercall
  2020-08-17 10:05 [PATCH 00/18] Support SDEI Virtualization Gavin Shan
                   ` (11 preceding siblings ...)
  2020-08-17 10:05 ` [PATCH 12/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_PE_{MASK, UNMASK} hypercall Gavin Shan
@ 2020-08-17 10:05 ` Gavin Shan
  2020-08-17 10:05 ` [PATCH 14/18] arm64/kvm: Implement event handler Gavin Shan
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_{PRIVATE,SHARED}_RESET hypercalls by
adding kvm_sdei_hypercall_reset(). We can't forward the hypercall to the
underlying firmware for passthrough events because they might be shared
by multiple VMs. Instead, the behaviour is simulated by unregistering all
the (private or shared) events which have been registered on the specific
VM. The request is only directed to the underlying firmware, by calling
sdei_event_unregister(), when the last reference to the event goes away.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 0816136e73a6..2d5e44bb5497 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -741,6 +741,18 @@ static unsigned long kvm_sdei_reset(struct kvm *kvm, unsigned int types)
 	return ret;
 }
 
+static unsigned long kvm_sdei_hypercall_reset(struct kvm_vcpu *vcpu,
+					      bool is_private)
+{
+	struct kvm *kvm = vcpu->kvm;
+	unsigned int types = is_private ? (1 << SDEI_EVENT_TYPE_PRIVATE) :
+					  (1 << SDEI_EVENT_TYPE_SHARED);
+
+	kvm_sdei_reset(kvm, types);
+
+	return SDEI_SUCCESS;
+}
+
 int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 {
 	u32 function = smccc_get_function(vcpu);
@@ -782,8 +794,14 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		break;
 	case SDEI_1_0_FN_SDEI_INTERRUPT_BIND:
 	case SDEI_1_0_FN_SDEI_INTERRUPT_RELEASE:
+		ret = SDEI_NOT_SUPPORTED;
+		break;
 	case SDEI_1_0_FN_SDEI_PRIVATE_RESET:
+		ret = kvm_sdei_hypercall_reset(vcpu, true);
+		break;
 	case SDEI_1_0_FN_SDEI_SHARED_RESET:
+		ret = kvm_sdei_hypercall_reset(vcpu, false);
+		break;
 	default:
 		ret = SDEI_NOT_SUPPORTED;
 	}
-- 
2.23.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 14/18] arm64/kvm: Implement event handler
  2020-08-17 10:05 [PATCH 00/18] Support SDEI Virtualization Gavin Shan
                   ` (12 preceding siblings ...)
  2020-08-17 10:05 ` [PATCH 13/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_{PRIVATE, SHARED}_RESET hypercall Gavin Shan
@ 2020-08-17 10:05 ` Gavin Shan
  2020-08-17 10:05 ` [PATCH 15/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_{COMPLETE, COMPLETE_AND_RESUME} hypercall Gavin Shan
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This implements the event handler with the help of the KVM SDEI vCPU
event, which is represented by "struct kvm_sdei_vcpu_event". A shared
event is delivered to all VMs where it was registered and enabled. A
private event is delivered to the vCPUs which are running or suspended
on the current physical CPU.

A KVM_REQ_SDEI request is raised on the vCPU whenever it receives a new
event, regardless of the event type. kvm_sdei_deliver() is then called
when the vCPU is loaded, to inject the SDEI event into the guest. The
behaviour is defined in the SDEI specification (v1.0):

   * x0 to x17 are saved.
   * The interrupted PC/PState are saved.
   * x0/x1/x2/x3 are set to the event number, the event parameter, the
     interrupted PC and the interrupted PState respectively.
   * PSTATE is modified as follows: DAIF=0b1111, EL=ELc, nRW=0, SP=1.
   * PC is set to the address of the registered handler.
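
A guest-side view of the resulting entry state. The handler name and
the C prototype are assumptions made for illustration only; the guest
really enters at the raw address it registered via EVENT_REGISTER, with
x0-x3 holding the values listed above:

   /*
    * Hypothetical guest handler: entered with DAIF masked (0b1111) at
    * the client exception level, with SP=1 selected.
    */
   void sdei_handler(unsigned long event_num,          /* x0 */
                     unsigned long event_param,        /* x1: argument from registration */
                     unsigned long interrupted_pc,     /* x2 */
                     unsigned long interrupted_pstate) /* x3 */
   {
           /* handle the event, then complete it (see PATCH[15-16]) */
   }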

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/include/asm/kvm_host.h |   1 +
 arch/arm64/include/asm/kvm_sdei.h |   2 +
 arch/arm64/kvm/arm.c              |   4 +
 arch/arm64/kvm/sdei.c             | 240 +++++++++++++++++++++++++++++-
 4 files changed, 246 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2a8cfb3895f7..ba8cdc304b81 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -45,6 +45,7 @@
 #define KVM_REQ_VCPU_RESET	KVM_ARCH_REQ(2)
 #define KVM_REQ_RECORD_STEAL	KVM_ARCH_REQ(3)
 #define KVM_REQ_RELOAD_GICv4	KVM_ARCH_REQ(4)
+#define KVM_REQ_SDEI		KVM_ARCH_REQ(5)
 
 #define KVM_DIRTY_LOG_MANUAL_CAPS   (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
 				     KVM_DIRTY_LOG_INITIALLY_SET)
diff --git a/arch/arm64/include/asm/kvm_sdei.h b/arch/arm64/include/asm/kvm_sdei.h
index 6cbf4015a371..70e613941577 100644
--- a/arch/arm64/include/asm/kvm_sdei.h
+++ b/arch/arm64/include/asm/kvm_sdei.h
@@ -104,8 +104,10 @@ static inline bool kvm_sdei_num_is_valid(unsigned long num)
 }
 
 int kvm_sdei_hypercall(struct kvm_vcpu *vcpu);
+void kvm_sdei_deliver(struct kvm_vcpu *vcpu);
 void kvm_sdei_init(void);
 void kvm_sdei_create_vcpu(struct kvm_vcpu *vcpu);
+void kvm_sdei_vcpu_load(struct kvm_vcpu *vcpu);
 void kvm_sdei_destroy_vm(struct kvm *kvm);
 
 #endif /* __ARM64_KVM_SDEI_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index bb539b51cd57..a79a4343bac6 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -356,6 +356,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	vcpu->cpu = cpu;
 
+	kvm_sdei_vcpu_load(vcpu);
 	kvm_vgic_load(vcpu);
 	kvm_timer_vcpu_load(vcpu);
 	kvm_vcpu_load_sysregs(vcpu);
@@ -623,6 +624,9 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu))
 			kvm_reset_vcpu(vcpu);
 
+		if (kvm_check_request(KVM_REQ_SDEI, vcpu))
+			kvm_sdei_deliver(vcpu);
+
 		/*
 		 * Clear IRQ_PENDING requests that were made to guarantee
 		 * that a VCPU sees new virtual interrupts.
diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 2d5e44bb5497..52d0f0809a37 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -151,11 +151,242 @@ static unsigned long kvm_sdei_hypercall_version(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
-static int kvm_sdei_handler(u32 num, struct pt_regs *regs, void *arg)
+void kvm_sdei_deliver(struct kvm_vcpu *vcpu)
+{
+	struct kvm_sdei_event *event;
+	struct kvm_sdei_kvm_event *kevent;
+	struct kvm_sdei_vcpu_event *tmp, *vevent = NULL;
+	struct user_pt_regs *regs;
+	unsigned long num, type, priority, pstate;
+	bool handle_critical;
+	int index, i;
+
+	spin_lock(&vcpu->arch.sdei_lock);
+
+	/* No way to preempt critical event */
+	if (vcpu->arch.sdei_critical_event)
+		goto unlock;
+
+	/* Find the suitable event to deliver */
+	handle_critical = vcpu->arch.sdei_normal_event ? true : false;
+	list_for_each_entry(tmp, &vcpu->arch.sdei_events, link) {
+		event = tmp->event->event;
+		priority = event->priv ? event->priv->priority :
+					 event->event->priority;
+		if (!handle_critical ||
+		    (priority == SDEI_EVENT_PRIORITY_CRITICAL)) {
+			vevent = tmp;
+			kevent = vevent->event;
+			break;
+		}
+	}
+
+	if (!vevent)
+		goto unlock;
+
+	/* Save registers: x0 -> x17, PC, PState */
+	if (priority == SDEI_EVENT_PRIORITY_CRITICAL) {
+		vcpu->arch.sdei_critical_event = vevent;
+		regs = &vcpu->arch.sdei_critical_regs;
+	} else {
+		vcpu->arch.sdei_normal_event = vevent;
+		regs = &vcpu->arch.sdei_normal_regs;
+	}
+
+	for (i = 0; i < 18; i++)
+		regs->regs[i] = vcpu_get_reg(vcpu, i);
+
+	regs->pc = *vcpu_pc(vcpu);
+	regs->pstate = *vcpu_cpsr(vcpu);
+
+	/* Inject SDEI event: x0 -> x3, PC, PState */
+	num = event->priv ? event->priv->num : event->event->event_num;
+	type = event->priv ? event->priv->type : event->event->type;
+	index = (type == SDEI_EVENT_TYPE_PRIVATE) ? vcpu->vcpu_idx : 0;
+	for (i = 0; i < 18; i++)
+		vcpu_set_reg(vcpu, i, 0);
+
+	vcpu_set_reg(vcpu, 0, num);
+	vcpu_set_reg(vcpu, 1, kevent->params[index]);
+	vcpu_set_reg(vcpu, 2, regs->pc);
+	vcpu_set_reg(vcpu, 3, regs->pstate);
+
+	pstate = regs->pstate;
+	pstate |= (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT);
+	pstate &= ~PSR_MODE_MASK;
+	pstate |= PSR_MODE_EL1h;
+	pstate &= ~PSR_MODE32_BIT;
+
+	vcpu_write_spsr(vcpu, regs->pstate);
+	*vcpu_cpsr(vcpu) = pstate;
+	*vcpu_pc(vcpu) = kevent->entries[index];
+
+	/* Notifier */
+	if (event->priv && event->priv->notifier)
+		event->priv->notifier(vcpu, num, KVM_SDEI_STATE_DELIVERED);
+
+unlock:
+	spin_unlock(&vcpu->arch.sdei_lock);
+}
+
+static int kvm_sdei_queue_event(struct kvm_vcpu *vcpu,
+				struct kvm_sdei_kvm_event *kevent)
+{
+	struct kvm_sdei_vcpu_event *e, *vevent = NULL;
+
+	lockdep_assert_held(&kevent->event->lock);
+	lockdep_assert_held(&vcpu->arch.sdei_lock);
+
+	list_for_each_entry(e, &vcpu->arch.sdei_events, link) {
+		if (e->event == kevent) {
+			vevent = e;
+			break;
+		}
+	}
+
+	/*
+	 * We just need to increase the count if the vCPU event already
+	 * exists. Otherwise, we have to create a new one.
+	 */
+	if (vevent) {
+		vevent->users++;
+		kevent->users++;
+		kvm_make_request(KVM_REQ_SDEI, vcpu);
+		return 0;
+	}
+
+	vevent = kzalloc(sizeof(*vevent), GFP_ATOMIC);
+	if (!vevent) {
+		pr_warn("%s: Unable to alloc memory (%lu, %u-%d)\n",
+			__func__, kevent->event->num,
+			kevent->kvm->userspace_pid, vcpu->vcpu_idx);
+		return -ENOMEM;
+	}
+
+	vevent->event = kevent;
+	vevent->users = 1;
+	kevent->users++;
+	list_add_tail(&vevent->link, &vcpu->arch.sdei_events);
+	kvm_make_request(KVM_REQ_SDEI, vcpu);
+
+	return 0;
+}
+
+/*
+ * Queue the shared event to the target VMs where the event has been
+ * registered and enabled. For the particular VM, the event is delivered
+ * to the first unmasked vCPU if the strict routing isn't specified.
+ * Otherwise, the event is delivered to the specified vCPU.
+ *
+ * If the vCPU event exists, we just need to increase its count. Otherwise,
+ * a new one is created and queued to the target vCPU.
+ */
+static int kvm_sdei_shared_handler(struct kvm_sdei_event *event)
+{
+	struct kvm_sdei_kvm_event *kevent, *n;
+	struct kvm_vcpu *target, *vcpu;
+	unsigned long affinity;
+	int i;
+
+	spin_lock(&event->lock);
+
+	rbtree_postorder_for_each_entry_safe(kevent, n,
+					     &event->root, node) {
+		if (!test_bit(0, kevent->registered) ||
+		    !test_bit(0, kevent->enabled))
+			continue;
+
+		/*
+		 * Select the target vCPU according to the routing
+		 * mode and affinity.
+		 */
+		target = NULL;
+		kvm_for_each_vcpu(i, vcpu, kevent->kvm) {
+			affinity = kvm_vcpu_get_mpidr_aff(vcpu);
+			spin_lock(&vcpu->arch.sdei_lock);
+
+			if (kevent->route_mode == SDEI_EVENT_REGISTER_RM_ANY) {
+				if (!vcpu->arch.sdei_masked) {
+					target = vcpu;
+					spin_unlock(&vcpu->arch.sdei_lock);
+					break;
+				}
+			} else if (kevent->route_affinity == affinity) {
+				target = !vcpu->arch.sdei_masked ? vcpu : NULL;
+				spin_unlock(&vcpu->arch.sdei_lock);
+				break;
+			}
+
+			spin_unlock(&vcpu->arch.sdei_lock);
+		}
+
+		if (!target)
+			continue;
+
+		spin_lock(&target->arch.sdei_lock);
+		kvm_sdei_queue_event(target, kevent);
+		spin_unlock(&target->arch.sdei_lock);
+	}
+
+	spin_unlock(&event->lock);
+
+	return 0;
+}
+
+/*
+ * The private SDEI event is delivered to the vCPUs which are
+ * running or suspended on the current CPU.
+ */
+static int kvm_sdei_private_handler(struct kvm_sdei_event *event)
 {
+	struct kvm_sdei_kvm_event *kevent, *n;
+	struct kvm_vcpu *vcpu;
+	int i;
+
+	spin_lock(&event->lock);
+
+	rbtree_postorder_for_each_entry_safe(kevent, n,
+					     &event->root, node) {
+		if (bitmap_empty(kevent->registered, KVM_MAX_VCPUS) ||
+		    bitmap_empty(kevent->enabled, KVM_MAX_VCPUS))
+			continue;
+
+		kvm_for_each_vcpu(i, vcpu, kevent->kvm) {
+			if (!test_bit(vcpu->vcpu_idx, kevent->registered) ||
+			    !test_bit(vcpu->vcpu_idx, kevent->enabled))
+				continue;
+
+			spin_lock(&vcpu->arch.sdei_lock);
+
+			if (vcpu->arch.sdei_masked ||
+			    vcpu->arch.sdei_cpu != smp_processor_id()) {
+				spin_unlock(&vcpu->arch.sdei_lock);
+				continue;
+			}
+
+			kvm_sdei_queue_event(vcpu, kevent);
+
+			spin_unlock(&vcpu->arch.sdei_lock);
+		}
+	}
+
+	spin_unlock(&event->lock);
+
 	return 0;
 }
 
+static int kvm_sdei_handler(u32 num, struct pt_regs *regs, void *arg)
+{
+	struct kvm_sdei_event *event = (struct kvm_sdei_event *)arg;
+	unsigned long type = (event->priv) ? event->priv->type :
+					     event->event->type;
+
+	if (type == SDEI_EVENT_TYPE_SHARED)
+		kvm_sdei_shared_handler(event);
+
+	return kvm_sdei_private_handler(event);
+}
+
 static unsigned long kvm_sdei_hypercall_register(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -826,6 +1057,13 @@ void kvm_sdei_create_vcpu(struct kvm_vcpu *vcpu)
 	INIT_LIST_HEAD(&vcpu->arch.sdei_events);
 }
 
+void kvm_sdei_vcpu_load(struct kvm_vcpu *vcpu)
+{
+	spin_lock(&vcpu->arch.sdei_lock);
+	vcpu->arch.sdei_cpu = smp_processor_id();
+	spin_unlock(&vcpu->arch.sdei_lock);
+}
+
 void kvm_sdei_destroy_vm(struct kvm *kvm)
 {
 	unsigned int types = ((1 << SDEI_EVENT_TYPE_PRIVATE) |
-- 
2.23.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 15/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_{COMPLETE, COMPLETE_AND_RESUME} hypercall
  2020-08-17 10:05 [PATCH 00/18] Support SDEI Virtualization Gavin Shan
                   ` (13 preceding siblings ...)
  2020-08-17 10:05 ` [PATCH 14/18] arm64/kvm: Implement event handler Gavin Shan
@ 2020-08-17 10:05 ` Gavin Shan
  2020-08-17 10:05 ` [PATCH 16/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_CONTEXT hypercall Gavin Shan
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_EVENT_{COMPLETE, COMPLETE_AND_RESUME}
hypercalls by implementing kvm_sdei_hypercall_complete(). If there is a
valid event context, the registers listed below are restored. Otherwise,
an error code is returned.

   * x0 -> x17
   * PC and PState

If it is a KVM private event, which originates from KVM itself, the
registered notifier is executed. Besides, an IRQ exception is injected
when the guest asks to be resumed via COMPLETE_AND_RESUME. The behaviour
is defined in the SDEI specification (v1.0).
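
A guest-side sketch of finishing a handler, reusing the selftest's
smccc() helper. No return value is checked because, on success, the
hypercall restores the interrupted context instead of returning one:

   /* At the tail of the event handler */
   smccc(SDEI_1_0_FN_SDEI_EVENT_COMPLETE, 0, 0, 0, 0, 0);

   /* or, to also have an IRQ exception injected on return */
   smccc(SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME, 0, 0, 0, 0, 0);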

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/include/asm/kvm_emulate.h |  2 +
 arch/arm64/kvm/aarch32.c             |  8 +++
 arch/arm64/kvm/inject_fault.c        | 30 ++++++++++
 arch/arm64/kvm/sdei.c                | 88 +++++++++++++++++++++++++++-
 4 files changed, 127 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 4d0f8ea600ba..bb7aee5927a5 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -29,10 +29,12 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu);
 void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr);
 
 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
+void kvm_inject_irq(struct kvm_vcpu *vcpu);
 void kvm_inject_vabt(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_undef32(struct kvm_vcpu *vcpu);
+void kvm_inject_irq32(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt32(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long addr);
 
diff --git a/arch/arm64/kvm/aarch32.c b/arch/arm64/kvm/aarch32.c
index 40a62a99fbf8..73e9059cf2e8 100644
--- a/arch/arm64/kvm/aarch32.c
+++ b/arch/arm64/kvm/aarch32.c
@@ -181,6 +181,14 @@ void kvm_inject_undef32(struct kvm_vcpu *vcpu)
 	post_fault_synchronize(vcpu, loaded);
 }
 
+void kvm_inject_irq32(struct kvm_vcpu *vcpu)
+{
+	bool loaded = pre_fault_synchronize(vcpu);
+
+	prepare_fault32(vcpu, PSR_AA32_MODE_IRQ, 4);
+	post_fault_synchronize(vcpu, loaded);
+}
+
 /*
  * Modelled after TakeDataAbortException() and TakePrefetchAbortException
  * pseudocode.
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index e21fdd93027a..84e50b002cd0 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -168,6 +168,22 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 	vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
 }
 
+static void inject_irq64(struct kvm_vcpu *vcpu)
+{
+	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
+
+	enter_exception64(vcpu, PSR_MODE_EL1h, except_type_irq);
+
+	/*
+	 * Build an unknown exception, depending on the instruction
+	 * set.
+	 */
+	if (kvm_vcpu_trap_il_is32bit(vcpu))
+		esr |= ESR_ELx_IL;
+
+	vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
+}
+
 /**
  * kvm_inject_dabt - inject a data abort into the guest
  * @vcpu: The VCPU to receive the data abort
@@ -214,6 +230,20 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
 		inject_undef64(vcpu);
 }
 
+/**
+ * kvm_inject_irq - inject an IRQ into the guest
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+void kvm_inject_irq(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_el1_is_32bit(vcpu))
+		kvm_inject_irq32(vcpu);
+	else
+		inject_irq64(vcpu);
+}
+
 void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 esr)
 {
 	vcpu_set_vsesr(vcpu, esr & ESR_ELx_ISS_MASK);
diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 52d0f0809a37..cf6908e87edd 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -590,6 +590,77 @@ static unsigned long kvm_sdei_hypercall_enable(struct kvm_vcpu *vcpu,
 	return ret;
 }
 
+static unsigned long kvm_sdei_hypercall_complete(struct kvm_vcpu *vcpu,
+						 bool resume)
+{
+	struct kvm_sdei_event *event = NULL;
+	struct kvm_sdei_kvm_event *kevent = NULL;
+	struct kvm_sdei_vcpu_event *vevent = NULL;
+	struct user_pt_regs *regs;
+	int i;
+
+	spin_lock(&vcpu->arch.sdei_lock);
+
+	if (!vcpu->arch.sdei_critical_event &&
+	    !vcpu->arch.sdei_normal_event) {
+		spin_unlock(&vcpu->arch.sdei_lock);
+		return SDEI_DENIED;
+	}
+
+	if (vcpu->arch.sdei_critical_event) {
+		vevent = vcpu->arch.sdei_critical_event;
+		regs = &vcpu->arch.sdei_critical_regs;
+		vcpu->arch.sdei_critical_event = NULL;
+	} else if (vcpu->arch.sdei_normal_event) {
+		vevent = vcpu->arch.sdei_normal_event;
+		regs = &vcpu->arch.sdei_normal_regs;
+		vcpu->arch.sdei_normal_event = NULL;
+	}
+
+	/* Restore registers: x0 -> x17, PC, PState */
+	for (i = 0; i < 18; i++)
+		vcpu_set_reg(vcpu, i, regs->regs[i]);
+
+	*vcpu_cpsr(vcpu) = regs->pstate;
+	*vcpu_pc(vcpu) = regs->pc;
+
+	/* Notifier for KVM private event */
+	kevent = vevent->event;
+	event = kevent->event;
+	if (event->priv && event->priv->notifier) {
+		event->priv->notifier(vcpu, event->priv->num,
+				      KVM_SDEI_STATE_COMPLETED);
+	}
+
+	/* Inject interrupt if needed */
+	if (resume)
+		kvm_inject_irq(vcpu);
+
+	/* Release vCPU event if needed */
+	vevent->users--;
+	if (!vevent->users) {
+		list_del(&vevent->link);
+		kfree(vevent);
+	}
+
+	/* Queue request if pending events exist */
+	if (!list_empty(&vcpu->arch.sdei_events))
+		kvm_make_request(KVM_REQ_SDEI, vcpu);
+
+	spin_unlock(&vcpu->arch.sdei_lock);
+
+	/*
+	 * Update the status of the KVM event. We can't do this with
+	 * the vCPU lock held. Otherwise, we might run into a nested
+	 * locking issue.
+	 */
+	spin_lock(&event->lock);
+	kevent->users--;
+	spin_unlock(&event->lock);
+
+	return SDEI_SUCCESS;
+}
+
 static unsigned long kvm_sdei_hypercall_unregister(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -988,6 +1059,7 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 {
 	u32 function = smccc_get_function(vcpu);
 	unsigned long ret;
+	bool has_result = true;
 
 	switch (function) {
 	case SDEI_1_0_FN_SDEI_VERSION:
@@ -1003,8 +1075,16 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		ret = kvm_sdei_hypercall_enable(vcpu, false);
 		break;
 	case SDEI_1_0_FN_SDEI_EVENT_CONTEXT:
+		ret = SDEI_NOT_SUPPORTED;
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE:
+		has_result = false;
+		ret = kvm_sdei_hypercall_complete(vcpu, false);
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME:
+		has_result = false;
+		ret = kvm_sdei_hypercall_complete(vcpu, true);
+		break;
 	case SDEI_1_0_FN_SDEI_EVENT_UNREGISTER:
 		ret = kvm_sdei_hypercall_unregister(vcpu);
 		break;
@@ -1037,7 +1117,13 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		ret = SDEI_NOT_SUPPORTED;
 	}
 
-	smccc_set_retval(vcpu, ret, 0, 0, 0);
+	/*
+	 * The COMPLETE and COMPLETE_AND_RESUME hypercalls have no
+	 * return value. Setting one would corrupt the context that
+	 * has just been restored.
+	 */
+	if (has_result)
+		smccc_set_retval(vcpu, ret, 0, 0, 0);
 
 	return 1;
 }
-- 
2.23.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 16/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_CONTEXT hypercall
  2020-08-17 10:05 [PATCH 00/18] Support SDEI Virtualization Gavin Shan
                   ` (14 preceding siblings ...)
  2020-08-17 10:05 ` [PATCH 15/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_{COMPLETE, COMPLETE_AND_RESUME} hypercall Gavin Shan
@ 2020-08-17 10:05 ` Gavin Shan
  2020-08-17 10:05 ` [PATCH 17/18] arm64/kvm: Expose SDEI capability Gavin Shan
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This supports the SDEI_1_0_FN_SDEI_EVENT_CONTEXT hypercall by implementing
kvm_sdei_hypercall_context(). If there is a valid event context on the
current vCPU and the register index is within range (0 to 17), the saved
register value is returned. Otherwise, an error code is returned.
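
A guest-side sketch, again with the selftest's smccc() helper: read the
interrupted context's x1 from within the handler. Note that a saved
register value and an SDEI error code both come back in x0, so a real
client has to tell them apart itself:

   int64_t saved_x1;

   saved_x1 = smccc(SDEI_1_0_FN_SDEI_EVENT_CONTEXT, 1, 0, 0, 0, 0);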

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/sdei.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index cf6908e87edd..0c5a16e8cbac 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -590,6 +590,35 @@ static unsigned long kvm_sdei_hypercall_enable(struct kvm_vcpu *vcpu,
 	return ret;
 }
 
+static unsigned long kvm_sdei_hypercall_context(struct kvm_vcpu *vcpu)
+{
+	struct user_pt_regs *regs;
+	unsigned long index = smccc_get_arg1(vcpu);
+	unsigned long ret = SDEI_SUCCESS;
+
+	if (index > 17) {
+		ret = SDEI_INVALID_PARAMETERS;
+		goto out;
+	}
+
+	spin_lock(&vcpu->arch.sdei_lock);
+
+	if (vcpu->arch.sdei_critical_event) {
+		regs = &vcpu->arch.sdei_critical_regs;
+	} else if (vcpu->arch.sdei_normal_event) {
+		regs = &vcpu->arch.sdei_normal_regs;
+	} else {
+		ret = SDEI_DENIED;
+		goto unlock;
+	}
+
+	ret = regs->regs[index];
+unlock:
+	spin_unlock(&vcpu->arch.sdei_lock);
+out:
+	return ret;
+}
+
 static unsigned long kvm_sdei_hypercall_complete(struct kvm_vcpu *vcpu,
 						 bool resume)
 {
@@ -1075,7 +1104,7 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 		ret = kvm_sdei_hypercall_enable(vcpu, false);
 		break;
 	case SDEI_1_0_FN_SDEI_EVENT_CONTEXT:
-		ret = SDEI_NOT_SUPPORTED;
+		ret = kvm_sdei_hypercall_context(vcpu);
 		break;
 	case SDEI_1_0_FN_SDEI_EVENT_COMPLETE:
 		has_result = false;
-- 
2.23.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 17/18] arm64/kvm: Expose SDEI capability
  2020-08-17 10:05 [PATCH 00/18] Support SDEI Virtualization Gavin Shan
                   ` (15 preceding siblings ...)
  2020-08-17 10:05 ` [PATCH 16/18] arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_CONTEXT hypercall Gavin Shan
@ 2020-08-17 10:05 ` Gavin Shan
  2020-08-17 10:05 ` [PATCH 18/18] kvm/selftests: Add SDEI test case Gavin Shan
  2020-08-17 11:01 ` [PATCH 00/18] Support SDEI Virtualization Gavin Shan
  18 siblings, 0 replies; 20+ messages in thread
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This exposes the SDEI capability, which is identified by KVM_CAP_ARM_SDEI.
Also, the ioctl interface (KVM_ARM_SDEI_INJECT) is introduced to allow
user space to inject a KVM-originated (private) event into a vCPU.

Besides, this implements two APIs to register a notifier for, and to
cancel, a pending KVM private SDEI event: kvm_sdei_register_notifier()
and kvm_sdei_cancel().
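
A hypothetical user-space sketch of the new interface. "vm_fd" and
"vcpu_fd" are assumed to be open KVM VM and vCPU file descriptors, and
the injected number is the KVM private event also used by the selftest;
the guest must have registered and enabled the event beforehand:

   #include <stdint.h>
   #include <sys/ioctl.h>
   #include <linux/kvm.h>

   static int inject_sdei_event(int vm_fd, int vcpu_fd)
   {
           uint64_t num = 0x40200000;      /* KVM private SDEI event */

           /* The capability must be advertised by the running kernel */
           if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_SDEI) <= 0)
                   return -1;

           /* KVM_ARM_SDEI_INJECT takes a pointer to the event number */
           return ioctl(vcpu_fd, KVM_ARM_SDEI_INJECT, &num);
   }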

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/include/asm/kvm_sdei.h |   4 +
 arch/arm64/kvm/arm.c              |   8 ++
 arch/arm64/kvm/reset.c            |   3 +
 arch/arm64/kvm/sdei.c             | 134 ++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h          |   4 +
 5 files changed, 153 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_sdei.h b/arch/arm64/include/asm/kvm_sdei.h
index 70e613941577..67a5a398fe10 100644
--- a/arch/arm64/include/asm/kvm_sdei.h
+++ b/arch/arm64/include/asm/kvm_sdei.h
@@ -105,6 +105,10 @@ static inline bool kvm_sdei_num_is_valid(unsigned long num)
 
 int kvm_sdei_hypercall(struct kvm_vcpu *vcpu);
 void kvm_sdei_deliver(struct kvm_vcpu *vcpu);
+int kvm_sdei_register_notifier(unsigned long num,
+			       kvm_sdei_notify_func_t func);
+int kvm_sdei_inject(struct kvm_vcpu *vcpu, unsigned long num, bool force);
+int kvm_sdei_cancel(struct kvm_vcpu *vcpu, unsigned long num);
 void kvm_sdei_init(void);
 void kvm_sdei_create_vcpu(struct kvm_vcpu *vcpu);
 void kvm_sdei_vcpu_load(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a79a4343bac6..4bec6c9ece18 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1201,6 +1201,14 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 
 		return kvm_arm_vcpu_finalize(vcpu, what);
 	}
+	case KVM_ARM_SDEI_INJECT: {
+		unsigned long num;
+
+		if (copy_from_user(&num, argp, sizeof(num)))
+			return -EFAULT;
+
+		return kvm_sdei_inject(vcpu, num, true);
+	}
 	default:
 		r = -EINVAL;
 	}
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 6ed36be51b4b..f292bed61147 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -83,6 +83,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = has_vhe() && system_supports_address_auth() &&
 				 system_supports_generic_auth();
 		break;
+	case KVM_CAP_ARM_SDEI:
+		r = 1;
+		break;
 	default:
 		r = 0;
 	}
diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 0c5a16e8cbac..2c05edcb3fbb 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -229,6 +229,18 @@ void kvm_sdei_deliver(struct kvm_vcpu *vcpu)
 	spin_unlock(&vcpu->arch.sdei_lock);
 }
 
+int kvm_sdei_register_notifier(unsigned long num, kvm_sdei_notify_func_t func)
+{
+	struct kvm_sdei_priv *priv = kvm_sdei_find_priv(num);
+
+	if (!priv)
+		return -ENOENT;
+
+	priv->notifier = func;
+
+	return 0;
+}
+
 static int kvm_sdei_queue_event(struct kvm_vcpu *vcpu,
 				struct kvm_sdei_kvm_event *kevent)
 {
@@ -272,6 +284,128 @@ static int kvm_sdei_queue_event(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+int kvm_sdei_inject(struct kvm_vcpu *vcpu, unsigned long num, bool force)
+{
+	struct kvm_sdei_event *event = NULL;
+	struct kvm_sdei_kvm_event *kevent = NULL;
+	unsigned long event_type, event_priority;
+	int index, ret = 0;
+
+	/* Find the event */
+	spin_lock(&kvm_sdei_lock);
+	event = kvm_sdei_find_event(vcpu->kvm, num, &kevent, NULL, NULL);
+	if (!kevent || !event->priv) {
+		ret = -ENOENT;
+		goto unlock;
+	}
+
+	/*
+	 * We're unable to inject a passthrough event, which means
+	 * the event must be associated with a KVM private event
+	 * descriptor.
+	 */
+	spin_lock(&event->lock);
+	if (!event->priv) {
+		ret = -EINVAL;
+		goto unlock_event;
+	}
+
+	/*
+	 * The event should have been registered and enabled on the
+	 * target vCPU.
+	 */
+	event_type = event->priv->type;
+	event_priority = event->priv->priority;
+	index = (event_type == SDEI_EVENT_TYPE_PRIVATE) ? vcpu->vcpu_idx : 0;
+	if (!test_bit(index, kevent->registered) ||
+	    !test_bit(index, kevent->enabled)) {
+		ret = -EPERM;
+		goto unlock_event;
+	}
+
+	/*
+	 * When @force is false, the event has to be delivered
+	 * immediately, so we need to check whether there is space
+	 * to do so.
+	 */
+	spin_lock(&vcpu->arch.sdei_lock);
+	if (!force) {
+		if (vcpu->arch.sdei_critical_event) {
+			ret = -ENOSPC;
+			goto unlock_vcpu;
+		}
+
+		if (vcpu->arch.sdei_normal_event &&
+		    event_priority != SDEI_EVENT_PRIORITY_CRITICAL) {
+			ret = -ENOSPC;
+			goto unlock_vcpu;
+		}
+	}
+
+	ret = kvm_sdei_queue_event(vcpu, kevent);
+
+unlock_vcpu:
+	spin_unlock(&vcpu->arch.sdei_lock);
+unlock_event:
+	spin_unlock(&event->lock);
+unlock:
+	spin_unlock(&kvm_sdei_lock);
+	return ret;
+}
+
+int kvm_sdei_cancel(struct kvm_vcpu *vcpu, unsigned long num)
+{
+	struct kvm_sdei_event *event;
+	struct kvm_sdei_kvm_event *kevent = NULL;
+	struct kvm_sdei_vcpu_event *e, *vevent = NULL;
+	unsigned long event_num;
+	int ret = 0;
+
+	spin_lock(&vcpu->arch.sdei_lock);
+
+	list_for_each_entry(e, &vcpu->arch.sdei_events, link) {
+		event = e->event->event;
+		event_num = event->priv ? event->priv->num :
+					  event->event->event_num;
+		if (event_num == num) {
+			vevent = e;
+			break;
+		}
+	}
+
+	if (!vevent) {
+		ret = -ENOENT;
+		goto unlock;
+	}
+
+	/* The event can't be cancelled if it has been delivered */
+	if (vevent->users == 1 &&
+	    (vevent == vcpu->arch.sdei_critical_event ||
+	     vevent == vcpu->arch.sdei_normal_event)) {
+		ret = -EINPROGRESS;
+		goto unlock;
+	}
+
+	/* Release the vCPU event if necessary */
+	kevent = vevent->event;
+	vevent->users--;
+	if (!vevent->users) {
+		list_del(&vevent->link);
+		kfree(vevent);
+	}
+
+unlock:
+	spin_unlock(&vcpu->arch.sdei_lock);
+
+	if (kevent) {
+		spin_lock(&event->lock);
+		kevent->users--;
+		spin_unlock(&event->lock);
+	}
+
+	return ret;
+}
+
 /*
  * Queue the shared event to the target VMs where the event have been
  * registered and enabled. For the particular VM, the event is delivered
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index f6d86033c4fa..c9731fad8bf5 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1035,6 +1035,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_LAST_CPU 184
 #define KVM_CAP_SMALLER_MAXPHYADDR 185
 #define KVM_CAP_S390_DIAG318 186
+#define KVM_CAP_ARM_SDEI 187
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -1536,6 +1537,9 @@ struct kvm_pv_cmd {
 /* Available with KVM_CAP_S390_PROTECTED */
 #define KVM_S390_PV_COMMAND		_IOWR(KVMIO, 0xc5, struct kvm_pv_cmd)
 
+/* Available with KVM_CAP_ARM_SDEI */
+#define KVM_ARM_SDEI_INJECT		_IOW(KVMIO, 0xc6, __u64)
+
 /* Secure Encrypted Virtualization command */
 enum sev_cmd_id {
 	/* Guest initialization commands */
-- 
2.23.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 18/18] kvm/selftests: Add SDEI test case
  2020-08-17 10:05 [PATCH 00/18] Support SDEI Virtualization Gavin Shan
                   ` (16 preceding siblings ...)
  2020-08-17 10:05 ` [PATCH 17/18] arm64/kvm: Expose SDEI capability Gavin Shan
@ 2020-08-17 10:05 ` Gavin Shan
  2020-08-17 11:01 ` [PATCH 00/18] Support SDEI Virtualization Gavin Shan
  18 siblings, 0 replies; 20+ messages in thread
From: Gavin Shan @ 2020-08-17 10:05 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

This adds an SDEI test case to the selftests, where various hypercalls
are issued against the KVM private event (0x40200000) and each result
is checked for errors. Note that two vCPUs are started by default and
both run the same sequence. The test simulates what the SDEI client
driver does; the following hypercalls are issued in order:

   SDEI_1_0_FN_SDEI_VERSION            (probing SDEI capability)
   SDEI_1_0_FN_SDEI_PE_UNMASK          (CPU online)
   SDEI_1_0_FN_SDEI_PRIVATE_RESET      (restart SDEI)
   SDEI_1_0_FN_SDEI_SHARED_RESET
   SDEI_1_0_FN_SDEI_EVENT_GET_INFO     (register event)
   SDEI_1_0_FN_SDEI_EVENT_GET_INFO
   SDEI_1_0_FN_SDEI_EVENT_GET_INFO
   SDEI_1_0_FN_SDEI_EVENT_REGISTER
   SDEI_1_0_FN_SDEI_EVENT_ENABLE       (enable event)
   SDEI_1_0_FN_SDEI_EVENT_DISABLE      (disable event)
   SDEI_1_0_FN_SDEI_EVENT_UNREGISTER   (unregister event)
   SDEI_1_0_FN_SDEI_PE_MASK            (CPU offline)

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 tools/testing/selftests/kvm/Makefile       |   1 +
 tools/testing/selftests/kvm/aarch64/sdei.c | 170 +++++++++++++++++++++
 2 files changed, 171 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/sdei.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 4a166588d99f..37a8a71200b4 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -68,6 +68,7 @@ TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
 TEST_GEN_PROGS_aarch64 += set_memory_region_test
 TEST_GEN_PROGS_aarch64 += steal_time
+TEST_GEN_PROGS_aarch64 += aarch64/sdei
 
 TEST_GEN_PROGS_s390x = s390x/memop
 TEST_GEN_PROGS_s390x += s390x/resets
diff --git a/tools/testing/selftests/kvm/aarch64/sdei.c b/tools/testing/selftests/kvm/aarch64/sdei.c
new file mode 100644
index 000000000000..37b3d6644b10
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/sdei.c
@@ -0,0 +1,170 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ARM SDEI test case
+ *
+ * Copyright Gavin Shan, Redhat Inc 2020.
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "linux/arm_sdei.h"
+
+#define NR_VCPUS	2
+#define SDEI_GPA_BASE	(1 << 30)
+#define SDEI_EVENT_NUM	0x40200000
+
+struct sdei_event {
+	uint32_t	cpu;
+	uint64_t	version;
+	uint64_t	num;
+	uint64_t	type;
+	uint64_t	priority;
+	uint64_t	signaled;
+};
+
+static struct sdei_event sdei_events[NR_VCPUS];
+
+static int64_t smccc(uint32_t func, uint64_t arg0, uint64_t arg1,
+		     uint64_t arg2, uint64_t arg3, uint64_t arg4)
+{
+	int64_t ret;
+
+	asm volatile(
+		"mov    x0, %1\n"
+		"mov    x1, %2\n"
+		"mov    x2, %3\n"
+		"mov    x3, %4\n"
+		"mov    x4, %5\n"
+		"mov    x5, %6\n"
+		"hvc    #0\n"
+		"mov    %0, x0\n"
+	: "=r" (ret) : "r" (func), "r" (arg0), "r" (arg1),
+	"r" (arg2), "r" (arg3), "r" (arg4) :
+	"x0", "x1", "x2", "x3", "x4", "x5");
+
+	return ret;
+}
+
+static inline bool is_error(int64_t ret)
+{
+	if (ret == SDEI_NOT_SUPPORTED      ||
+	    ret == SDEI_INVALID_PARAMETERS ||
+	    ret == SDEI_DENIED             ||
+	    ret == SDEI_PENDING            ||
+	    ret == SDEI_OUT_OF_RESOURCE)
+		return true;
+
+	return false;
+}
+
+static void guest_code(int cpu)
+{
+	struct sdei_event *event = &sdei_events[cpu];
+	int64_t ret;
+
+	/* CPU */
+	event->cpu = cpu;
+	event->num = SDEI_EVENT_NUM;
+	GUEST_ASSERT(cpu < NR_VCPUS);
+
+	/* Version */
+	ret = smccc(SDEI_1_0_FN_SDEI_VERSION, 0, 0, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+	GUEST_ASSERT(SDEI_VERSION_MAJOR(ret) == 1);
+	GUEST_ASSERT(SDEI_VERSION_MINOR(ret) == 0);
+	event->version = ret;
+
+	/* CPU unmasking */
+	ret = smccc(SDEI_1_0_FN_SDEI_PE_UNMASK, 0, 0, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+
+	/* Reset */
+	ret = smccc(SDEI_1_0_FN_SDEI_PRIVATE_RESET, 0, 0, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+	ret = smccc(SDEI_1_0_FN_SDEI_SHARED_RESET, 0, 0, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+
+	/* Event properties */
+	ret = smccc(SDEI_1_0_FN_SDEI_EVENT_GET_INFO,
+		     event->num, SDEI_EVENT_INFO_EV_TYPE, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+	event->type = ret;
+
+	ret = smccc(SDEI_1_0_FN_SDEI_EVENT_GET_INFO,
+		    event->num, SDEI_EVENT_INFO_EV_PRIORITY, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+	event->priority = ret;
+
+	ret = smccc(SDEI_1_0_FN_SDEI_EVENT_GET_INFO,
+		    event->num, SDEI_EVENT_INFO_EV_SIGNALED, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+	event->signaled = ret;
+
+	/* Event registration */
+	ret = smccc(SDEI_1_0_FN_SDEI_EVENT_REGISTER,
+		    event->num, 0, 0, SDEI_EVENT_REGISTER_RM_ANY, 0);
+	GUEST_ASSERT(!is_error(ret));
+
+	/* Event enablement */
+	ret = smccc(SDEI_1_0_FN_SDEI_EVENT_ENABLE,
+		    event->num, 0, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+
+	/* Event disablement */
+	ret = smccc(SDEI_1_0_FN_SDEI_EVENT_DISABLE,
+		    event->num, 0, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+
+	/* Event unregistration */
+	ret = smccc(SDEI_1_0_FN_SDEI_EVENT_UNREGISTER,
+		    event->num, 0, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+
+	/* CPU masking */
+	ret = smccc(SDEI_1_0_FN_SDEI_PE_MASK, 0, 0, 0, 0, 0);
+	GUEST_ASSERT(!is_error(ret));
+
+	GUEST_DONE();
+}
+
+int main(int argc, char **argv)
+{
+	struct kvm_vm *vm;
+	int i;
+
+	if (!kvm_check_cap(KVM_CAP_ARM_SDEI)) {
+		pr_info("SDEI not supported\n");
+		return 0;
+	}
+
+	vm = vm_create_default(0, 0, guest_code);
+	ucall_init(vm, NULL);
+
+	for (i = 1; i < NR_VCPUS; i++)
+		vm_vcpu_add_default(vm, i, guest_code);
+
+	for (i = 0; i < NR_VCPUS; i++) {
+		vcpu_args_set(vm, i, 1, i);
+		vcpu_run(vm, i);
+
+		sync_global_from_guest(vm, sdei_events[i]);
+		pr_info("--------------------------------\n");
+		pr_info("CPU:      %d\n",
+			sdei_events[i].cpu);
+		pr_info("Version:  %ld.%ld (0x%lx)\n",
+			SDEI_VERSION_MAJOR(sdei_events[i].version),
+			SDEI_VERSION_MINOR(sdei_events[i].version),
+			SDEI_VERSION_VENDOR(sdei_events[i].version));
+		pr_info("Event:    0x%08lx\n",
+			sdei_events[i].num);
+		pr_info("Type:     %s\n",
+			sdei_events[i].type ? "shared" : "private");
+		pr_info("Signaled: %s\n",
+			sdei_events[i].signaled ? "yes" : "no");
+	}
+
+	return 0;
+}
-- 
2.23.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 00/18] Support SDEI Virtualization
  2020-08-17 10:05 [PATCH 00/18] Support SDEI Virtualization Gavin Shan
                   ` (17 preceding siblings ...)
  2020-08-17 10:05 ` [PATCH 18/18] kvm/selftests: Add SDEI test case Gavin Shan
@ 2020-08-17 11:01 ` Gavin Shan
  18 siblings, 0 replies; 20+ messages in thread
From: Gavin Shan @ 2020-08-17 11:01 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, shan.gavin, pbonzini

On 8/17/20 8:05 PM, Gavin Shan wrote:
> This series intends to support SDEI virtualization. The background is
> the feature (Asynchronous Page Fault) needs virtualized SDEI event to
> deliver page-not-present notification from host to guest. This series
> depends on the series "Refactor SDEI Client Driver", which was posted
> previously. Both series can be found from github:
> 
>     https://developer.arm.com/documentation/den0054/a/
>     https://www.spinics.net/lists/arm-kernel/msg826783.html
>     https://github.com/gwshan/linux  ("sdei_client")
>     https://github.com/gwshan/linux  ("sdei")
> 
> First of all, bits[23:20] of the SDEI event number are reserved to
> indicate the SDEI event type:
> 
>     0x0: physical SDEI event number, originated from underly firmware
>     0x1: virtual SDEI event number, passed from KVM because of physical
>          SDEI event. The corresponding SDEI events are also called as
>          passthrou SDEI events.
>     0x2: KVM private SDEI event number, originated from KVM itself.
> 
> The implementation supports passthrou and KVM private SDEI events. The
> same SDEI event can be registered and enabled on multiple VMs. So the
> registered SDEI event is represented by "struct kvm_sdei_event" and
> formed into a linked list globally. "struct kvm_sdei_kvm_event" is
> created and inserted into the radix tree in "struct kvm_sdei_event",
> which is indexed by @kvm->userspace_pid if the corresponding SDEI event
> is registered on the particular KVM. Besides, "struct kvm_sdei_vcpu_event"
> is introduced to deliver SDEI event to one particular vCPU. So the data
> structs have different scopes, summaried as below:
> 
>     struct kvm_sdei_event: global scope
>     struct kvm_sdei_kvm_event: VM scope
>     struct kvm_sdei_vcpu_event: vCPU sope
> 
> For the passthrou SDEI events, the specific handler is registered to the
> underly firmware if it's supported. The core functionality of the handler
> is to route the incoming SDEI events to the target VM and vCPU. For the
> shared SDEI event, it's duplicated to all VMs where the SDEI event was
> registered and enabled. The target vCPU is chosen basing on the setting
> of routing affinity. For private SDEI event, the event received from the
> physical CPU is duplicated and delivered to the vCPUs, which are currently
> running or suspending on the physical CPU. For KVM private event, which is
> pre-defined and represented by "struct kvm_sdei_priv", API (kvm_sdei_inject())
> is always called to deliver the event to the specified vCPU.
> 
> The series is organized as below:
> 
> PATCH[01-02] Retrieve event signaled property on registration and add API
>               (sdei_event_get_info()) to retrieve event's information from
>               underly firmware for the passthrou SDEI events.
> PATCH[03]    Introduce template for smccc_get_argx().
> PATCH[04]    Adds the needed source files, data structs.
> PATCH[05-13] Support various hypercalls defined in SDEI specification (v1.0).
> PATCH[14]    Implements the SDEI handler to route the incoming passthrou SDEI
>               events to target VMs and vCPUs.
> PATCH[15-16] Support more hypercalls like COMPLETE, COMPLETE_AND_RESUME, and
>               CONTEXT.
> PATCH[17]    Support injecting KVM private SDEI event and expose the SDEI
>               capability.
> PATCH[18]    Add self-test case for KVM private SDEI event
> 

[+James/Mark/Eric]

The series was supposed to have been cc'ed to more folks, but
"git send-email" didn't do it properly for me. I assume I needn't
resend it unless I'm explicitly asked to.

Thanks,
Gavin

> Gavin Shan (18):
>    drivers/firmware/sdei: Retrieve event signaled property on
>      registration
>    drivers/firmware/sdei: Add sdei_event_get_info()
>    arm/smccc: Introduce template for inline functions
>    arm64/kvm: Add SDEI virtualization infrastructure
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_VERSION hypercall
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_REGISTER
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_{ENABLE, DISABLE} hypercall
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_UNREGISTER hypercall
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_STATUS hypercall
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_GET_INFO hypercall
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_ROUTING_SET hypercall
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_PE_{MASK, UNMASK} hypercall
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_{PRIVATE,SHARED}_RESET hypercall
>    arm64/kvm: Implement event handler
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_{COMPLETE,
>      COMPLETE_AND_RESUME} hypercall
>    arm64/kvm: Support SDEI_1_0_FN_SDEI_EVENT_CONTEXT hypercall
>    arm64/kvm: Expose SDEI capability
>    kvm/selftests: Add SDEI test case
> 
>   arch/arm64/include/asm/kvm_emulate.h       |    2 +
>   arch/arm64/include/asm/kvm_host.h          |   10 +
>   arch/arm64/include/asm/kvm_sdei.h          |  117 ++
>   arch/arm64/kvm/Makefile                    |    2 +-
>   arch/arm64/kvm/aarch32.c                   |    8 +
>   arch/arm64/kvm/arm.c                       |   19 +
>   arch/arm64/kvm/hypercalls.c                |   19 +
>   arch/arm64/kvm/inject_fault.c              |   30 +
>   arch/arm64/kvm/reset.c                     |    3 +
>   arch/arm64/kvm/sdei.c                      | 1322 ++++++++++++++++++++
>   drivers/firmware/arm_sdei.c                |   38 +
>   include/kvm/arm_hypercalls.h               |   34 +-
>   include/linux/arm_sdei.h                   |    7 +
>   include/uapi/linux/kvm.h                   |    4 +
>   tools/testing/selftests/kvm/Makefile       |    1 +
>   tools/testing/selftests/kvm/aarch64/sdei.c |  170 +++
>   16 files changed, 1766 insertions(+), 20 deletions(-)
>   create mode 100644 arch/arm64/include/asm/kvm_sdei.h
>   create mode 100644 arch/arm64/kvm/sdei.c
>   create mode 100644 tools/testing/selftests/kvm/aarch64/sdei.c
> 

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 20+ messages in thread
