From: "Adalbert Lazăr" <alazar@bitdefender.com>
To: kvm@vger.kernel.org
Cc: linux-mm@kvack.org, virtualization@lists.linux-foundation.org,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Radim Krčmář" <rkrcmar@redhat.com>,
"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>,
"Tamas K Lengyel" <tamas@tklengyel.com>,
"Mathieu Tarral" <mathieu.tarral@protonmail.com>,
"Samuel Laurén" <samuel.lauren@iki.fi>,
"Patrick Colp" <patrick.colp@oracle.com>,
"Jan Kiszka" <jan.kiszka@siemens.com>,
"Stefan Hajnoczi" <stefanha@redhat.com>,
"Weijiang Yang" <weijiang.yang@intel.com>,
"Yu C Zhang" <yu.c.zhang@intel.com>,
"Mihai Donțu" <mdontu@bitdefender.com>,
"Adalbert Lazăr" <alazar@bitdefender.com>,
"Mircea Cîrjaliu" <mcirjaliu@bitdefender.com>
Subject: [RFC PATCH v6 73/92] kvm: introspection: use remote mapping
Date: Fri, 9 Aug 2019 19:00:28 +0300
Message-ID: <20190809160047.8319-74-alazar@bitdefender.com>
In-Reply-To: <20190809160047.8319-1-alazar@bitdefender.com>
From: Mircea Cîrjaliu <mcirjaliu@bitdefender.com>
This commit adds the missing KVMI_GET_MAP_TOKEN command and handles the
hypercalls used to map/unmap guest pages.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Mircea Cîrjaliu <mcirjaliu@bitdefender.com>
Signed-off-by: Adalbert Lazăr <alazar@bitdefender.com>
---
Documentation/virtual/kvm/kvmi.rst | 39 ++++
arch/x86/kvm/Makefile | 2 +-
arch/x86/kvm/x86.c | 6 +
include/linux/kvmi.h | 3 +
virt/kvm/kvmi.c | 12 +-
virt/kvm/kvmi_int.h | 10 +
virt/kvm/kvmi_mem.c | 319 +++++++++++++++++++++++++++++
virt/kvm/kvmi_msg.c | 15 ++
8 files changed, 404 insertions(+), 2 deletions(-)
create mode 100644 virt/kvm/kvmi_mem.c
diff --git a/Documentation/virtual/kvm/kvmi.rst b/Documentation/virtual/kvm/kvmi.rst
index 572abab1f6ef..b12e14f14c21 100644
--- a/Documentation/virtual/kvm/kvmi.rst
+++ b/Documentation/virtual/kvm/kvmi.rst
@@ -1144,6 +1144,45 @@ Returns the guest memory type for a specific physical address.
* -KVM_EINVAL - padding is not zero
* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+25. KVMI_GET_MAP_TOKEN
+----------------------
+
+:Architecture: all
+:Versions: >= 1
+:Parameters: none
+:Returns:
+
+::
+
+ struct kvmi_error_code;
+ struct kvmi_get_map_token_reply {
+ struct kvmi_map_mem_token token;
+ };
+
+Where::
+
+ struct kvmi_map_mem_token {
+ __u64 token[4];
+ };
+
+Requests a token for a memory map operation.
+
+On this command, the host generates a random token to be used (once)
+to map a physical page of the introspected guest. The introspector
+can pass the token to the KVM_INTRO_MEM_MAP ioctl (on /dev/kvmmem)
+to map a guest physical page into one of its own pages. The ioctl,
+in turn, uses the KVM_HC_MEM_MAP hypercall (see hypercalls.txt).
+
+The guest kernel exposing /dev/kvmmem keeps a list of all the mappings
+(to all the guests introspected by the tool) in order to unmap them
+(using the KVM_HC_MEM_UNMAP hypercall) when /dev/kvmmem is closed or on
+demand (using the KVM_INTRO_MEM_UNMAP ioctl).
+
+:Errors:
+
+* -KVM_EAGAIN - too many tokens have accumulated
+* -KVM_ENOMEM - not enough memory to allocate a new token
+
Events
======
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 673cf37c0747..5bea446219ca 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -7,7 +7,7 @@ KVM := ../../../virt/kvm
kvm-y += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o \
$(KVM)/eventfd.o $(KVM)/irqchip.o $(KVM)/vfio.o
kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
-kvm-$(CONFIG_KVM_INTROSPECTION) += $(KVM)/kvmi.o $(KVM)/kvmi_msg.o kvmi.o
+kvm-$(CONFIG_KVM_INTROSPECTION) += $(KVM)/kvmi.o $(KVM)/kvmi_msg.o $(KVM)/kvmi_mem.o kvmi.o
kvm-y += x86.o mmu.o emulate.o i8259.o irq.o lapic.o \
i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 06f44ce8ed07..04b1d2916a0a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7337,6 +7337,12 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
ret = kvm_pv_send_ipi(vcpu->kvm, a0, a1, a2, a3, op_64_bit);
break;
#ifdef CONFIG_KVM_INTROSPECTION
+ case KVM_HC_MEM_MAP:
+ ret = kvmi_host_mem_map(vcpu, (gva_t)a0, (gpa_t)a1, (gpa_t)a2);
+ break;
+ case KVM_HC_MEM_UNMAP:
+ ret = kvmi_host_mem_unmap(vcpu, (gpa_t)a0);
+ break;
case KVM_HC_XEN_HVM_OP:
ret = 0;
if (!kvmi_hypercall_event(vcpu))
diff --git a/include/linux/kvmi.h b/include/linux/kvmi.h
index 10cd6c6412d2..dd980fb0ebcd 100644
--- a/include/linux/kvmi.h
+++ b/include/linux/kvmi.h
@@ -24,6 +24,9 @@ bool kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, u8 write);
bool kvmi_tracked_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
bool kvmi_single_step(struct kvm_vcpu *vcpu, gpa_t gpa, int *emulation_type);
void kvmi_handle_requests(struct kvm_vcpu *vcpu);
+int kvmi_host_mem_map(struct kvm_vcpu *vcpu, gva_t tkn_gva,
+ gpa_t req_gpa, gpa_t map_gpa);
+int kvmi_host_mem_unmap(struct kvm_vcpu *vcpu, gpa_t map_gpa);
void kvmi_stop_ss(struct kvm_vcpu *vcpu);
bool kvmi_vcpu_enabled_ss(struct kvm_vcpu *vcpu);
void kvmi_init_emulate(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvmi.c b/virt/kvm/kvmi.c
index ca146ffec061..157f3a401d64 100644
--- a/virt/kvm/kvmi.c
+++ b/virt/kvm/kvmi.c
@@ -10,6 +10,7 @@
#include "kvmi_int.h"
#include <linux/kthread.h>
#include <linux/bitmap.h>
+#include <linux/remote_mapping.h>
#define MAX_PAUSE_REQUESTS 1001
@@ -320,11 +321,13 @@ static int kvmi_cache_create(void)
int kvmi_init(void)
{
+ kvmi_mem_init();
return kvmi_cache_create();
}
void kvmi_uninit(void)
{
+ kvmi_mem_exit();
kvmi_cache_destroy();
}
@@ -1647,6 +1650,11 @@ int kvmi_cmd_write_physical(struct kvm *kvm, u64 gpa, u64 size, const void *buf)
return 0;
}
+int kvmi_cmd_alloc_token(struct kvm *kvm, struct kvmi_map_mem_token *token)
+{
+ return kvmi_mem_generate_token(kvm, token);
+}
+
int kvmi_cmd_control_events(struct kvm_vcpu *vcpu, unsigned int event_id,
bool enable)
{
@@ -2015,7 +2023,9 @@ int kvmi_ioctl_unhook(struct kvm *kvm, bool force_reset)
if (!ikvm)
return -EFAULT;
- if (!force_reset && !kvmi_unhook_event(kvm))
+ if (force_reset)
+ mm_remote_reset();
+ else if (!kvmi_unhook_event(kvm))
err = -ENOENT;
kvmi_put(kvm);
diff --git a/virt/kvm/kvmi_int.h b/virt/kvm/kvmi_int.h
index c96fa2b1e9b7..2432377d6371 100644
--- a/virt/kvm/kvmi_int.h
+++ b/virt/kvm/kvmi_int.h
@@ -148,6 +148,8 @@ struct kvmi {
struct task_struct *recv;
atomic_t ev_seq;
+ atomic_t num_tokens;
+
uuid_t uuid;
DECLARE_BITMAP(cmd_allow_mask, KVMI_NUM_COMMANDS);
@@ -229,7 +231,9 @@ int kvmi_cmd_control_events(struct kvm_vcpu *vcpu, unsigned int event_id,
bool enable);
int kvmi_cmd_control_vm_events(struct kvmi *ikvm, unsigned int event_id,
bool enable);
+int kvmi_cmd_alloc_token(struct kvm *kvm, struct kvmi_map_mem_token *token);
int kvmi_cmd_pause_vcpu(struct kvm_vcpu *vcpu, bool wait);
+unsigned long gfn_to_hva_safe(struct kvm *kvm, gfn_t gfn);
struct kvmi * __must_check kvmi_get(struct kvm *kvm);
void kvmi_put(struct kvm *kvm);
int kvmi_run_jobs_and_wait(struct kvm_vcpu *vcpu);
@@ -298,4 +302,10 @@ int kvmi_arch_cmd_control_msr(struct kvm_vcpu *vcpu,
const struct kvmi_control_msr *req);
int kvmi_arch_cmd_get_mtrr_type(struct kvm_vcpu *vcpu, u64 gpa, u8 *type);
+/* kvmi_mem.c */
+void kvmi_mem_init(void);
+void kvmi_mem_exit(void);
+int kvmi_mem_generate_token(struct kvm *kvm, struct kvmi_map_mem_token *token);
+void kvmi_clear_vm_tokens(struct kvm *kvm);
+
#endif
diff --git a/virt/kvm/kvmi_mem.c b/virt/kvm/kvmi_mem.c
new file mode 100644
index 000000000000..6244add60062
--- /dev/null
+++ b/virt/kvm/kvmi_mem.c
@@ -0,0 +1,319 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KVM introspection memory mapping implementation
+ *
+ * Copyright (C) 2017-2019 Bitdefender S.R.L.
+ *
+ * Author:
+ * Mircea Cirjaliu <mcirjaliu@bitdefender.com>
+ */
+
+#include <linux/kernel.h>
+#include <linux/kvm_host.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/pagemap.h>
+#include <linux/spinlock.h>
+#include <linux/printk.h>
+#include <linux/random.h>
+#include <linux/kvmi.h>
+#include <linux/ktime.h>
+#include <linux/hrtimer.h>
+#include <linux/workqueue.h>
+#include <linux/remote_mapping.h>
+
+#include <uapi/linux/kvmi.h>
+
+#include "kvmi_int.h"
+
+#define KVMI_MEM_MAX_TOKENS 8
+#define KVMI_MEM_TOKEN_TIMEOUT 3
+#define TOKEN_TIMEOUT_NSEC (KVMI_MEM_TOKEN_TIMEOUT * NSEC_PER_SEC)
+
+static struct list_head token_list;
+static spinlock_t token_lock;
+static struct hrtimer token_timer;
+static struct work_struct token_work;
+
+struct token_entry {
+ struct list_head token_list;
+ struct kvmi_map_mem_token token;
+ struct kvm *kvm;
+ ktime_t timestamp;
+};
+
+void kvmi_clear_vm_tokens(struct kvm *kvm)
+{
+ struct token_entry *cur, *next;
+ struct kvmi *ikvm = IKVM(kvm);
+ struct list_head temp;
+
+ INIT_LIST_HEAD(&temp);
+
+ spin_lock(&token_lock);
+ list_for_each_entry_safe(cur, next, &token_list, token_list) {
+ if (cur->kvm == kvm) {
+ atomic_dec(&ikvm->num_tokens);
+
+ list_del(&cur->token_list);
+ list_add(&cur->token_list, &temp);
+ }
+ }
+ spin_unlock(&token_lock);
+
+ /* freeing a KVM may sleep */
+ list_for_each_entry_safe(cur, next, &temp, token_list) {
+ kvm_put_kvm(cur->kvm);
+ kfree(cur);
+ }
+}
+
+static void token_timeout_work(struct work_struct *work)
+{
+ struct token_entry *cur, *next;
+ ktime_t now = ktime_get();
+ struct kvmi *ikvm;
+ struct list_head temp;
+
+ INIT_LIST_HEAD(&temp);
+
+ spin_lock(&token_lock);
+ list_for_each_entry_safe(cur, next, &token_list, token_list)
+ if (ktime_sub(now, cur->timestamp) > TOKEN_TIMEOUT_NSEC) {
+ ikvm = kvmi_get(cur->kvm);
+ if (ikvm) {
+ atomic_dec(&ikvm->num_tokens);
+ kvmi_put(cur->kvm);
+ }
+
+ list_del(&cur->token_list);
+ list_add(&cur->token_list, &temp);
+ }
+ spin_unlock(&token_lock);
+
+ if (!list_empty(&temp))
+ kvm_info("kvmi: token(s) timed out\n");
+
+ /* freeing a KVM may sleep */
+ list_for_each_entry_safe(cur, next, &temp, token_list) {
+ kvm_put_kvm(cur->kvm);
+ kfree(cur);
+ }
+}
+
+static enum hrtimer_restart token_timer_fn(struct hrtimer *timer)
+{
+ schedule_work(&token_work);
+
+ hrtimer_add_expires_ns(timer, NSEC_PER_SEC);
+ return HRTIMER_RESTART;
+}
+
+int kvmi_mem_generate_token(struct kvm *kvm, struct kvmi_map_mem_token *token)
+{
+ struct kvmi *ikvm;
+ struct token_entry *tep;
+
+ /* too many tokens have accumulated, retry later */
+ ikvm = IKVM(kvm);
+ if (atomic_read(&ikvm->num_tokens) > KVMI_MEM_MAX_TOKENS)
+ return -KVM_EAGAIN;
+
+ tep = kmalloc(sizeof(*tep), GFP_KERNEL);
+ if (tep == NULL)
+ return -KVM_ENOMEM;
+
+ /* pin KVM so it won't go away while we wait for HC */
+ kvm_get_kvm(kvm);
+ get_random_bytes(token, sizeof(*token));
+ atomic_inc(&ikvm->num_tokens);
+
+ print_hex_dump_debug("kvmi: new token ", DUMP_PREFIX_NONE,
+ 32, 1, token, sizeof(*token), false);
+
+ /* init token entry */
+ INIT_LIST_HEAD(&tep->token_list);
+ memcpy(&tep->token, token, sizeof(*token));
+ tep->kvm = kvm;
+ tep->timestamp = ktime_get();
+
+ /* add to list */
+ spin_lock(&token_lock);
+ list_add_tail(&tep->token_list, &token_list);
+ spin_unlock(&token_lock);
+
+ return 0;
+}
+
+static struct kvm *find_machine_at(struct kvm_vcpu *vcpu, gva_t tkn_gva)
+{
+ long result;
+ gpa_t tkn_gpa;
+ struct kvmi_map_mem_token token;
+ struct list_head *cur;
+ struct token_entry *tep, *found = NULL;
+ struct kvm *target_kvm = NULL;
+ struct kvmi *ikvm;
+
+ /* machine token is passed as pointer */
+ tkn_gpa = kvm_mmu_gva_to_gpa_system(vcpu, tkn_gva, 0, NULL);
+ if (tkn_gpa == UNMAPPED_GVA)
+ return NULL;
+
+ /* copy token to local address space */
+ result = kvm_read_guest(vcpu->kvm, tkn_gpa, &token, sizeof(token));
+ if (IS_ERR_VALUE(result)) {
+ kvm_err("kvmi: failed copying token from user\n");
+ return ERR_PTR(result);
+ }
+
+ /* consume token & find the VM */
+ spin_lock(&token_lock);
+ list_for_each(cur, &token_list) {
+ tep = list_entry(cur, struct token_entry, token_list);
+
+ if (!memcmp(&token, &tep->token, sizeof(token))) {
+ list_del(&tep->token_list);
+ found = tep;
+ break;
+ }
+ }
+ spin_unlock(&token_lock);
+
+ if (found != NULL) {
+ target_kvm = found->kvm;
+ kfree(found);
+
+ ikvm = kvmi_get(target_kvm);
+ if (ikvm) {
+ atomic_dec(&ikvm->num_tokens);
+ kvmi_put(target_kvm);
+ }
+ }
+
+ return target_kvm;
+}
+
+int kvmi_host_mem_map(struct kvm_vcpu *vcpu, gva_t tkn_gva,
+ gpa_t req_gpa, gpa_t map_gpa)
+{
+ int result = 0;
+ struct kvm *target_kvm;
+
+ gfn_t req_gfn;
+ hva_t req_hva;
+ struct mm_struct *req_mm;
+
+ gfn_t map_gfn;
+ hva_t map_hva;
+
+ kvm_debug("kvmi: mapping request req_gpa %016llx, map_gpa %016llx\n",
+ req_gpa, map_gpa);
+
+ /* get the struct kvm * corresponding to the token */
+ target_kvm = find_machine_at(vcpu, tkn_gva);
+ if (IS_ERR(target_kvm)) {
+ return PTR_ERR(target_kvm);
+ } else if (target_kvm == NULL) {
+ kvm_err("kvmi: unable to find target machine\n");
+ return -KVM_ENOENT;
+ }
+ req_mm = target_kvm->mm;
+
+ /* translate source addresses */
+ req_gfn = gpa_to_gfn(req_gpa);
+ req_hva = gfn_to_hva_safe(target_kvm, req_gfn);
+ if (kvm_is_error_hva(req_hva)) {
+ kvm_err("kvmi: invalid req_gpa %016llx\n", req_gpa);
+ result = -KVM_EFAULT;
+ goto out;
+ }
+
+ kvm_debug("kvmi: req_gpa %016llx -> req_hva %016lx\n",
+ req_gpa, req_hva);
+
+ /* translate destination addresses */
+ map_gfn = gpa_to_gfn(map_gpa);
+ map_hva = gfn_to_hva_safe(vcpu->kvm, map_gfn);
+ if (kvm_is_error_hva(map_hva)) {
+ kvm_err("kvmi: invalid map_gpa %016llx\n", map_gpa);
+ result = -KVM_EFAULT;
+ goto out;
+ }
+
+ kvm_debug("kvmi: map_gpa %016llx -> map_hva %016lx\n",
+ map_gpa, map_hva);
+
+ /* actually do the mapping */
+ result = mm_remote_map(req_mm, req_hva, map_hva);
+ if (IS_ERR_VALUE((long)result)) {
+ if (result == -EBUSY)
+ kvm_debug("kvmi: mapping of req_gpa %016llx failed: %d.\n",
+ req_gpa, result);
+ else
+ kvm_err("kvmi: mapping of req_gpa %016llx failed: %d.\n",
+ req_gpa, result);
+ goto out;
+ }
+
+ /* all fine */
+ kvm_debug("kvmi: mapping of req_gpa %016llx successful\n", req_gpa);
+
+out:
+ kvm_put_kvm(target_kvm);
+
+ return result;
+}
+
+int kvmi_host_mem_unmap(struct kvm_vcpu *vcpu, gpa_t map_gpa)
+{
+ gfn_t map_gfn;
+ hva_t map_hva;
+ int result;
+
+ kvm_debug("kvmi: unmapping request for map_gpa %016llx\n", map_gpa);
+
+ /* convert GPA -> HVA */
+ map_gfn = gpa_to_gfn(map_gpa);
+ map_hva = gfn_to_hva_safe(vcpu->kvm, map_gfn);
+ if (kvm_is_error_hva(map_hva)) {
+ result = -KVM_EFAULT;
+ kvm_err("kvmi: invalid map_gpa %016llx\n", map_gpa);
+ goto out;
+ }
+
+ kvm_debug("kvmi: map_gpa %016llx -> map_hva %016lx\n",
+ map_gpa, map_hva);
+
+ /* actually do the unmapping */
+ result = mm_remote_unmap(map_hva);
+ if (IS_ERR_VALUE((long)result))
+ goto out;
+
+ kvm_debug("kvmi: unmapping of map_gpa %016llx successful\n", map_gpa);
+
+out:
+ return result;
+}
+
+void kvmi_mem_init(void)
+{
+ ktime_t expire;
+
+ INIT_LIST_HEAD(&token_list);
+ spin_lock_init(&token_lock);
+ INIT_WORK(&token_work, token_timeout_work);
+
+ hrtimer_init(&token_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+ token_timer.function = token_timer_fn;
+ expire = ktime_add_ns(ktime_get(), NSEC_PER_SEC);
+ hrtimer_start(&token_timer, expire, HRTIMER_MODE_ABS);
+
+ kvm_info("kvmi: initialized host memory introspection\n");
+}
+
+void kvmi_mem_exit(void)
+{
+ hrtimer_cancel(&token_timer);
+}
diff --git a/virt/kvm/kvmi_msg.c b/virt/kvm/kvmi_msg.c
index 3e381f95b686..a5f87aafa237 100644
--- a/virt/kvm/kvmi_msg.c
+++ b/virt/kvm/kvmi_msg.c
@@ -33,6 +33,7 @@ static const char *const msg_IDs[] = {
[KVMI_EVENT_REPLY] = "KVMI_EVENT_REPLY",
[KVMI_GET_CPUID] = "KVMI_GET_CPUID",
[KVMI_GET_GUEST_INFO] = "KVMI_GET_GUEST_INFO",
+ [KVMI_GET_MAP_TOKEN] = "KVMI_GET_MAP_TOKEN",
[KVMI_GET_MTRR_TYPE] = "KVMI_GET_MTRR_TYPE",
[KVMI_GET_PAGE_ACCESS] = "KVMI_GET_PAGE_ACCESS",
[KVMI_GET_PAGE_WRITE_BITMAP] = "KVMI_GET_PAGE_WRITE_BITMAP",
@@ -352,6 +353,19 @@ static int handle_write_physical(struct kvmi *ikvm,
return kvmi_msg_vm_maybe_reply(ikvm, msg, ec, NULL, 0);
}
+static int handle_get_map_token(struct kvmi *ikvm,
+ const struct kvmi_msg_hdr *msg,
+ const void *_req)
+{
+ struct kvmi_get_map_token_reply rpl;
+ int ec;
+
+ memset(&rpl, 0, sizeof(rpl));
+ ec = kvmi_cmd_alloc_token(ikvm->kvm, &rpl.token);
+
+ return kvmi_msg_vm_maybe_reply(ikvm, msg, ec, &rpl, sizeof(rpl));
+}
+
static bool enable_spp(struct kvmi *ikvm)
{
if (!ikvm->spp.initialized) {
@@ -524,6 +538,7 @@ static int(*const msg_vm[])(struct kvmi *, const struct kvmi_msg_hdr *,
[KVMI_CONTROL_SPP] = handle_control_spp,
[KVMI_CONTROL_VM_EVENTS] = handle_control_vm_events,
[KVMI_GET_GUEST_INFO] = handle_get_guest_info,
+ [KVMI_GET_MAP_TOKEN] = handle_get_map_token,
[KVMI_GET_PAGE_ACCESS] = handle_get_page_access,
[KVMI_GET_PAGE_WRITE_BITMAP] = handle_get_page_write_bitmap,
[KVMI_GET_VERSION] = handle_get_version,