From: "Adalbert Lazăr" <alazar@bitdefender.com>
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Mihai Donțu" <mdontu@bitdefender.com>,
"Nicușor Cîțu" <ncitu@bitdefender.com>,
"Adalbert Lazăr" <alazar@bitdefender.com>
Subject: [PATCH v9 66/84] KVM: introspection: add KVMI_VCPU_INJECT_EXCEPTION + KVMI_EVENT_TRAP
Date: Wed, 22 Jul 2020 00:09:04 +0300 [thread overview]
Message-ID: <20200721210922.7646-67-alazar@bitdefender.com> (raw)
In-Reply-To: <20200721210922.7646-1-alazar@bitdefender.com>
From: Mihai Donțu <mdontu@bitdefender.com>
The KVMI_VCPU_INJECT_EXCEPTION command is used by the introspection tool
to inject exceptions, for example, to get a page from swap.
The exception is queued right before entering the guest unless there is
already an exception pending. The introspection tool is notified with
a KVMI_EVENT_TRAP event about the success of the injection. In case
of failure, the introspection tool is expected to try again later.
Signed-off-by: Mihai Donțu <mdontu@bitdefender.com>
Co-developed-by: Nicușor Cîțu <ncitu@bitdefender.com>
Signed-off-by: Nicușor Cîțu <ncitu@bitdefender.com>
Co-developed-by: Adalbert Lazăr <alazar@bitdefender.com>
Signed-off-by: Adalbert Lazăr <alazar@bitdefender.com>
---
Documentation/virt/kvm/kvmi.rst | 74 +++++++++++
arch/x86/kvm/kvmi.c | 103 ++++++++++++++++
arch/x86/kvm/x86.c | 3 +
include/linux/kvmi_host.h | 12 ++
include/uapi/linux/kvmi.h | 20 ++-
.../testing/selftests/kvm/x86_64/kvmi_test.c | 115 +++++++++++++++++-
virt/kvm/introspection/kvmi.c | 45 +++++++
virt/kvm/introspection/kvmi_int.h | 8 ++
virt/kvm/introspection/kvmi_msg.c | 50 ++++++--
9 files changed, 418 insertions(+), 12 deletions(-)
diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index e1f978fc799b..4263a9ac90e4 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -561,6 +561,7 @@ because these are sent as a result of certain commands (but they can be
disallowed by the device manager) ::
KVMI_EVENT_PAUSE_VCPU
+ KVMI_EVENT_TRAP
The VM events (e.g. *KVMI_EVENT_UNHOOK*) are controlled with
the *KVMI_VM_CONTROL_EVENTS* command.
@@ -749,6 +750,45 @@ ID set.
* -KVM_EINVAL - the padding is not zero
* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+16. KVMI_VCPU_INJECT_EXCEPTION
+------------------------------
+
+:Architectures: all
+:Versions: >= 1
+:Parameters:
+
+::
+
+ struct kvmi_vcpu_hdr;
+ struct kvmi_vcpu_inject_exception {
+ __u8 nr;
+ __u8 padding1;
+ __u16 padding2;
+ __u32 error_code;
+ __u64 address;
+ };
+
+:Returns:
+
+::
+
+ struct kvmi_error_code
+
+Injects a vCPU exception, with or without an error code. In the case of a
+page fault exception, the guest virtual address has to be specified.
+
+The *KVMI_EVENT_TRAP* event will be sent with the effective injected
+exception.
+
+:Errors:
+
+* -KVM_EPERM - the *KVMI_EVENT_TRAP* event is disallowed
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_EBUSY - another *KVMI_VCPU_INJECT_EXCEPTION*-*KVMI_EVENT_TRAP* pair
+ is in progress
+
Events
======
@@ -960,3 +1000,37 @@ register (see **KVMI_VCPU_CONTROL_EVENTS**).
``kvmi_event``, the control register number, the old value and the new value
are sent to the introspection tool. The *CONTINUE* action will set the ``new_val``.
+
+6. KVMI_EVENT_TRAP
+------------------
+
+:Architectures: all
+:Versions: >= 1
+:Actions: CONTINUE, CRASH
+:Parameters:
+
+::
+
+ struct kvmi_event;
+ struct kvmi_event_trap {
+ __u8 nr;
+ __u8 padding1;
+ __u16 padding2;
+ __u32 error_code;
+ __u64 address;
+ };
+
+:Returns:
+
+::
+
+ struct kvmi_vcpu_hdr;
+ struct kvmi_event_reply;
+
+This event is sent as a result of a previous *KVMI_VCPU_INJECT_EXCEPTION*
+command. Because it has a high priority, it will be sent before any
+other vCPU introspection event.
+
+``kvmi_event``, exception/interrupt number, exception code
+(``error_code``) and address are sent to the introspection tool,
+which should check if its exception has been injected or overridden.
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index e340a2c3500f..0c6ab136084f 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -569,3 +569,106 @@ bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu)
return ret;
}
EXPORT_SYMBOL(kvmi_cr3_intercepted);
+
+int kvmi_arch_cmd_vcpu_inject_exception(struct kvm_vcpu *vcpu, u8 vector,
+ u32 error_code, u64 address)
+{
+ struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+ bool has_error;
+
+ if (vcpui->exception.pending || vcpui->exception.send_event)
+ return -KVM_EBUSY;
+
+ vcpui->exception.pending = true;
+
+ has_error = x86_exception_has_error_code(vector);
+
+ vcpui->exception.nr = vector;
+ vcpui->exception.error_code = has_error ? error_code : 0;
+ vcpui->exception.error_code_valid = has_error;
+ vcpui->exception.address = address;
+
+ return 0;
+}
+
+static void kvmi_arch_queue_exception(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+ struct x86_exception e = {
+ .vector = vcpui->exception.nr,
+ .error_code_valid = vcpui->exception.error_code_valid,
+ .error_code = vcpui->exception.error_code,
+ .address = vcpui->exception.address,
+ };
+
+ if (e.vector == PF_VECTOR)
+ kvm_inject_page_fault(vcpu, &e);
+ else if (e.error_code_valid)
+ kvm_queue_exception_e(vcpu, e.vector, e.error_code);
+ else
+ kvm_queue_exception(vcpu, e.vector);
+}
+
+static void kvmi_arch_save_injected_event(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+
+ vcpui->exception.error_code = 0;
+ vcpui->exception.error_code_valid = false;
+
+ vcpui->exception.address = vcpu->arch.cr2;
+ if (vcpu->arch.exception.injected) {
+ vcpui->exception.nr = vcpu->arch.exception.nr;
+ vcpui->exception.error_code_valid =
+ x86_exception_has_error_code(vcpu->arch.exception.nr);
+ vcpui->exception.error_code = vcpu->arch.exception.error_code;
+ } else if (vcpu->arch.interrupt.injected) {
+ vcpui->exception.nr = vcpu->arch.interrupt.nr;
+ }
+}
+
+void kvmi_arch_inject_exception(struct kvm_vcpu *vcpu)
+{
+ if (!kvm_event_needs_reinjection(vcpu)) {
+ kvmi_arch_queue_exception(vcpu);
+ kvm_inject_pending_exception(vcpu);
+ }
+
+ kvmi_arch_save_injected_event(vcpu);
+}
+
+static u32 kvmi_send_trap(struct kvm_vcpu *vcpu, u8 nr,
+ u32 error_code, u64 addr)
+{
+ struct kvmi_event_trap e;
+ int err, action;
+
+ memset(&e, 0, sizeof(e));
+ e.nr = nr;
+ e.error_code = error_code;
+ e.address = addr;
+
+ err = __kvmi_send_event(vcpu, KVMI_EVENT_TRAP, &e, sizeof(e),
+ NULL, 0, &action);
+ if (err)
+ return KVMI_EVENT_ACTION_CONTINUE;
+
+ return action;
+}
+
+void kvmi_arch_send_trap_event(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+ u32 action;
+
+ action = kvmi_send_trap(vcpu, vcpui->exception.nr,
+ vcpui->exception.error_code,
+ vcpui->exception.address);
+
+ switch (action) {
+ case KVMI_EVENT_ACTION_CONTINUE:
+ break;
+ default:
+ kvmi_handle_common_event_actions(vcpu->kvm, action);
+ }
+}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a12aa8e125d3..af987ad1a174 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8566,6 +8566,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
goto cancel_injection;
}
+ if (!kvmi_enter_guest(vcpu))
+ req_immediate_exit = true;
+
if (req_immediate_exit) {
kvm_make_request(KVM_REQ_EVENT, vcpu);
kvm_x86_ops.request_immediate_exit(vcpu);
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index 01219c56d042..1fae589d9d35 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -36,6 +36,15 @@ struct kvm_vcpu_introspection {
struct kvm_regs delayed_regs;
bool have_delayed_regs;
+
+ struct {
+ u8 nr;
+ u32 error_code;
+ bool error_code_valid;
+ u64 address;
+ bool pending;
+ bool send_event;
+ } exception;
};
struct kvm_introspection {
@@ -76,6 +85,7 @@ int kvmi_ioctl_preunhook(struct kvm *kvm);
void kvmi_handle_requests(struct kvm_vcpu *vcpu);
bool kvmi_hypercall_event(struct kvm_vcpu *vcpu);
bool kvmi_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 insn_len);
+bool kvmi_enter_guest(struct kvm_vcpu *vcpu);
#else
@@ -90,6 +100,8 @@ static inline bool kvmi_hypercall_event(struct kvm_vcpu *vcpu) { return false; }
static inline bool kvmi_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva,
u8 insn_len)
{ return true; }
+static inline bool kvmi_enter_guest(struct kvm_vcpu *vcpu)
+ { return true; }
#endif /* CONFIG_KVM_INTROSPECTION */
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index e31b474e3496..faf4624d7a97 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -34,7 +34,8 @@ enum {
KVMI_VM_CONTROL_CLEANUP = 14,
- KVMI_VCPU_CONTROL_CR = 15,
+ KVMI_VCPU_CONTROL_CR = 15,
+ KVMI_VCPU_INJECT_EXCEPTION = 16,
KVMI_NUM_MESSAGES
};
@@ -45,6 +46,7 @@ enum {
KVMI_EVENT_HYPERCALL = 2,
KVMI_EVENT_BREAKPOINT = 3,
KVMI_EVENT_CR = 4,
+ KVMI_EVENT_TRAP = 5,
KVMI_NUM_EVENTS
};
@@ -162,4 +164,20 @@ struct kvmi_event_reply {
__u32 padding2;
};
+struct kvmi_event_trap {
+ __u8 nr;
+ __u8 padding1;
+ __u16 padding2;
+ __u32 error_code;
+ __u64 address;
+};
+
+struct kvmi_vcpu_inject_exception {
+ __u8 nr;
+ __u8 padding1;
+ __u16 padding2;
+ __u32 error_code;
+ __u64 address;
+};
+
#endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 7694fa8fef89..9abf4ec0d09a 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -45,6 +45,8 @@ struct vcpu_worker_data {
int vcpu_id;
int test_id;
bool stop;
+ bool shutdown;
+ bool restart_on_shutdown;
};
enum {
@@ -687,11 +689,19 @@ static void *vcpu_worker(void *data)
vcpu_run(ctx->vm, ctx->vcpu_id);
- TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+ TEST_ASSERT(run->exit_reason == KVM_EXIT_IO
+ || (run->exit_reason == KVM_EXIT_SHUTDOWN
+ && ctx->shutdown),
"vcpu_run() failed, test_id %d, exit reason %u (%s)\n",
ctx->test_id, run->exit_reason,
exit_reason_str(run->exit_reason));
+ if (run->exit_reason == KVM_EXIT_SHUTDOWN) {
+ if (ctx->restart_on_shutdown)
+ continue;
+ break;
+ }
+
TEST_ASSERT(get_ucall(ctx->vm, ctx->vcpu_id, &uc),
"No guest request\n");
@@ -1308,6 +1318,108 @@ static void test_cmd_vcpu_control_cr(struct kvm_vm *vm)
test_invalid_vcpu_control_cr(vm);
}
+static void __inject_exception(int nr)
+{
+ struct {
+ struct kvmi_msg_hdr hdr;
+ struct kvmi_vcpu_hdr vcpu_hdr;
+ struct kvmi_vcpu_inject_exception cmd;
+ } req = {};
+ int r;
+
+ req.cmd.nr = nr;
+
+ r = __do_vcpu0_command(KVMI_VCPU_INJECT_EXCEPTION,
+ &req.hdr, sizeof(req), NULL, 0);
+ TEST_ASSERT(r == 0,
+ "KVMI_VCPU_INJECT_EXCEPTION failed, error %d(%s)\n",
+ -r, kvm_strerror(-r));
+}
+
+static void receive_exception_event(int nr)
+{
+ struct kvmi_msg_hdr hdr;
+ struct {
+ struct kvmi_event common;
+ struct kvmi_event_trap trap;
+ } ev;
+ struct vcpu_reply rpl = {};
+
+ receive_event(&hdr, &ev.common, sizeof(ev), KVMI_EVENT_TRAP);
+
+ pr_info("Exception event: vector %u, error_code 0x%x, address 0x%llx\n",
+ ev.trap.nr, ev.trap.error_code, ev.trap.address);
+
+ TEST_ASSERT(ev.trap.nr == nr,
+ "Injected exception %u instead of %u\n",
+ ev.trap.nr, nr);
+
+ reply_to_event(&hdr, &ev.common, KVMI_EVENT_ACTION_CONTINUE,
+ &rpl, sizeof(rpl));
+}
+
+static void test_succeded_ud_injection(void)
+{
+ __u8 ud_vector = 6;
+
+ __inject_exception(ud_vector);
+
+ receive_exception_event(ud_vector);
+}
+
+static void test_failed_ud_injection(struct kvm_vm *vm,
+ struct vcpu_worker_data *data)
+{
+ struct kvmi_msg_hdr hdr;
+ struct {
+ struct kvmi_event common;
+ struct kvmi_event_breakpoint bp;
+ } ev;
+ struct vcpu_reply rpl = {};
+ __u8 ud_vector = 6, bp_vector = 3;
+
+ WRITE_ONCE(data->test_id, GUEST_TEST_BP);
+
+ receive_event(&hdr, &ev.common, sizeof(ev), KVMI_EVENT_BREAKPOINT);
+
+ /* skip the breakpoint instruction, next time guest_bp_test() runs */
+ ev.common.arch.regs.rip += ev.bp.insn_len;
+ __set_registers(vm, &ev.common.arch.regs);
+
+ __inject_exception(ud_vector);
+
+ /* reinject the #BP exception because of the continue action */
+ reply_to_event(&hdr, &ev.common, KVMI_EVENT_ACTION_CONTINUE,
+ &rpl, sizeof(rpl));
+
+ receive_exception_event(bp_vector);
+}
+
+static void test_cmd_vcpu_inject_exception(struct kvm_vm *vm)
+{
+ struct vcpu_worker_data data = {
+ .vm = vm,
+ .vcpu_id = VCPU_ID,
+ .shutdown = true,
+ .restart_on_shutdown = true,
+ };
+ pthread_t vcpu_thread;
+
+ if (!is_intel_cpu()) {
+ print_skip("TODO: %s() - make it work with AMD", __func__);
+ return;
+ }
+
+ enable_vcpu_event(vm, KVMI_EVENT_BREAKPOINT);
+ vcpu_thread = start_vcpu_worker(&data);
+
+ test_succeded_ud_injection();
+ test_failed_ud_injection(vm, &data);
+
+ stop_vcpu_worker(vcpu_thread, &data);
+ disable_vcpu_event(vm, KVMI_EVENT_BREAKPOINT);
+}
+
static void test_introspection(struct kvm_vm *vm)
{
srandom(time(0));
@@ -1332,6 +1444,7 @@ static void test_introspection(struct kvm_vm *vm)
test_event_breakpoint(vm);
test_cmd_vm_control_cleanup(vm);
test_cmd_vcpu_control_cr(vm);
+ test_cmd_vcpu_inject_exception(vm);
unhook_introspection(vm);
}
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index 2dd82aa5e11c..d4b39d0800ee 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -101,6 +101,7 @@ static void setup_known_events(void)
set_bit(KVMI_EVENT_CR, Kvmi_known_vcpu_events);
set_bit(KVMI_EVENT_HYPERCALL, Kvmi_known_vcpu_events);
set_bit(KVMI_EVENT_PAUSE_VCPU, Kvmi_known_vcpu_events);
+ set_bit(KVMI_EVENT_TRAP, Kvmi_known_vcpu_events);
bitmap_or(Kvmi_known_events, Kvmi_known_vm_events,
Kvmi_known_vcpu_events, KVMI_NUM_EVENTS);
@@ -855,6 +856,16 @@ static void kvmi_vcpu_pause_event(struct kvm_vcpu *vcpu)
}
}
+void kvmi_send_pending_event(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+
+ if (vcpui->exception.send_event) {
+ vcpui->exception.send_event = false;
+ kvmi_arch_send_trap_event(vcpu);
+ }
+}
+
void kvmi_handle_requests(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
@@ -864,6 +875,8 @@ void kvmi_handle_requests(struct kvm_vcpu *vcpu)
if (!kvmi)
goto out;
+ kvmi_send_pending_event(vcpu);
+
for (;;) {
kvmi_run_jobs(vcpu);
@@ -962,3 +975,35 @@ bool kvmi_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 insn_len)
return ret;
}
EXPORT_SYMBOL(kvmi_breakpoint_event);
+
+static void kvmi_inject_pending_exception(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+
+ kvmi_arch_inject_exception(vcpu);
+
+ vcpui->exception.pending = false;
+ vcpui->exception.send_event = true;
+ kvm_make_request(KVM_REQ_INTROSPECTION, vcpu);
+}
+
+bool kvmi_enter_guest(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_introspection *vcpui;
+ struct kvm_introspection *kvmi;
+ bool r = true;
+
+ kvmi = kvmi_get(vcpu->kvm);
+ if (!kvmi)
+ return true;
+
+ vcpui = VCPUI(vcpu);
+
+ if (vcpui->exception.pending) {
+ kvmi_inject_pending_exception(vcpu);
+ r = false;
+ }
+
+ kvmi_put(vcpu->kvm);
+ return r;
+}
diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h
index c206376eb0ad..51c03097a7d5 100644
--- a/virt/kvm/introspection/kvmi_int.h
+++ b/virt/kvm/introspection/kvmi_int.h
@@ -34,6 +34,9 @@ bool kvmi_msg_process(struct kvm_introspection *kvmi);
int kvmi_send_event(struct kvm_vcpu *vcpu, u32 ev_id,
void *ev, size_t ev_size,
void *rpl, size_t rpl_size, int *action);
+int __kvmi_send_event(struct kvm_vcpu *vcpu, u32 ev_id,
+ void *ev, size_t ev_size,
+ void *rpl, size_t rpl_size, int *action);
int kvmi_msg_send_unhook(struct kvm_introspection *kvmi);
u32 kvmi_msg_send_vcpu_pause(struct kvm_vcpu *vcpu);
u32 kvmi_msg_send_hypercall(struct kvm_vcpu *vcpu);
@@ -55,6 +58,7 @@ void kvmi_handle_common_event_actions(struct kvm *kvm, u32 action);
void kvmi_cmd_vm_control_cleanup(struct kvm_introspection *kvmi, bool enable);
struct kvm_introspection * __must_check kvmi_get(struct kvm *kvm);
void kvmi_put(struct kvm *kvm);
+void kvmi_send_pending_event(struct kvm_vcpu *vcpu);
int kvmi_cmd_vm_control_events(struct kvm_introspection *kvmi,
unsigned int event_id, bool enable);
int kvmi_cmd_vcpu_control_events(struct kvm_vcpu *vcpu,
@@ -97,5 +101,9 @@ int kvmi_arch_cmd_control_intercept(struct kvm_vcpu *vcpu,
unsigned int event_id, bool enable);
int kvmi_arch_cmd_vcpu_control_cr(struct kvm_vcpu *vcpu,
const struct kvmi_vcpu_control_cr *req);
+int kvmi_arch_cmd_vcpu_inject_exception(struct kvm_vcpu *vcpu, u8 vector,
+ u32 error_code, u64 address);
+void kvmi_arch_send_trap_event(struct kvm_vcpu *vcpu);
+void kvmi_arch_inject_exception(struct kvm_vcpu *vcpu);
#endif
diff --git a/virt/kvm/introspection/kvmi_msg.c b/virt/kvm/introspection/kvmi_msg.c
index 330fad27e1df..63efb85ff1ae 100644
--- a/virt/kvm/introspection/kvmi_msg.c
+++ b/virt/kvm/introspection/kvmi_msg.c
@@ -492,6 +492,25 @@ static int handle_vcpu_control_cr(const struct kvmi_vcpu_msg_job *job,
return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
}
+static int handle_vcpu_inject_exception(const struct kvmi_vcpu_msg_job *job,
+ const struct kvmi_msg_hdr *msg,
+ const void *_req)
+{
+ const struct kvmi_vcpu_inject_exception *req = _req;
+ int ec;
+
+ if (!is_event_allowed(KVMI(job->vcpu->kvm), KVMI_EVENT_TRAP))
+ ec = -KVM_EPERM;
+ else if (req->padding1 || req->padding2)
+ ec = -KVM_EINVAL;
+ else
+ ec = kvmi_arch_cmd_vcpu_inject_exception(job->vcpu, req->nr,
+ req->error_code,
+ req->address);
+
+ return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
+}
+
/*
* These functions are executed from the vCPU thread. The receiving thread
* passes the messages using a newly allocated 'struct kvmi_vcpu_msg_job'
@@ -500,13 +519,14 @@ static int handle_vcpu_control_cr(const struct kvmi_vcpu_msg_job *job,
*/
static int(*const msg_vcpu[])(const struct kvmi_vcpu_msg_job *,
const struct kvmi_msg_hdr *, const void *) = {
- [KVMI_EVENT] = handle_vcpu_event_reply,
- [KVMI_VCPU_CONTROL_CR] = handle_vcpu_control_cr,
- [KVMI_VCPU_CONTROL_EVENTS] = handle_vcpu_control_events,
- [KVMI_VCPU_GET_CPUID] = handle_vcpu_get_cpuid,
- [KVMI_VCPU_GET_INFO] = handle_vcpu_get_info,
- [KVMI_VCPU_GET_REGISTERS] = handle_vcpu_get_registers,
- [KVMI_VCPU_SET_REGISTERS] = handle_vcpu_set_registers,
+ [KVMI_EVENT] = handle_vcpu_event_reply,
+ [KVMI_VCPU_CONTROL_CR] = handle_vcpu_control_cr,
+ [KVMI_VCPU_CONTROL_EVENTS] = handle_vcpu_control_events,
+ [KVMI_VCPU_GET_CPUID] = handle_vcpu_get_cpuid,
+ [KVMI_VCPU_GET_INFO] = handle_vcpu_get_info,
+ [KVMI_VCPU_GET_REGISTERS] = handle_vcpu_get_registers,
+ [KVMI_VCPU_INJECT_EXCEPTION] = handle_vcpu_inject_exception,
+ [KVMI_VCPU_SET_REGISTERS] = handle_vcpu_set_registers,
};
static bool is_vcpu_command(u16 id)
@@ -770,9 +790,9 @@ static void kvmi_setup_vcpu_reply(struct kvm_vcpu_introspection *vcpui,
vcpui->waiting_for_reply = true;
}
-int kvmi_send_event(struct kvm_vcpu *vcpu, u32 ev_id,
- void *ev, size_t ev_size,
- void *rpl, size_t rpl_size, int *action)
+int __kvmi_send_event(struct kvm_vcpu *vcpu, u32 ev_id,
+ void *ev, size_t ev_size,
+ void *rpl, size_t rpl_size, int *action)
{
struct kvmi_msg_hdr hdr;
struct kvmi_event common;
@@ -812,6 +832,16 @@ int kvmi_send_event(struct kvm_vcpu *vcpu, u32 ev_id,
return err;
}
+int kvmi_send_event(struct kvm_vcpu *vcpu, u32 ev_id,
+ void *ev, size_t ev_size,
+ void *rpl, size_t rpl_size, int *action)
+{
+ kvmi_send_pending_event(vcpu);
+
+ return __kvmi_send_event(vcpu, ev_id, ev, ev_size,
+ rpl, rpl_size, action);
+}
+
u32 kvmi_msg_send_vcpu_pause(struct kvm_vcpu *vcpu)
{
int err, action;