From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5BCDCC433FF for ; Fri, 9 Aug 2019 16:15:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 096EC2086A for ; Fri, 9 Aug 2019 16:15:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2437036AbfHIQO6 (ORCPT ); Fri, 9 Aug 2019 12:14:58 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:52804 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2407417AbfHIQO5 (ORCPT ); Fri, 9 Aug 2019 12:14:57 -0400 Received: from smtp.bitdefender.com (smtp02.buh.bitdefender.net [10.17.80.76]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id E2678305D34A; Fri, 9 Aug 2019 19:01:19 +0300 (EEST) Received: from localhost.localdomain (unknown [89.136.169.210]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 7F899305B7A3; Fri, 9 Aug 2019 19:01:19 +0300 (EEST) From: =?UTF-8?q?Adalbert=20Laz=C4=83r?= To: kvm@vger.kernel.org Cc: linux-mm@kvack.org, virtualization@lists.linux-foundation.org, Paolo Bonzini , =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= , Konrad Rzeszutek Wilk , Tamas K Lengyel , Mathieu Tarral , =?UTF-8?q?Samuel=20Laur=C3=A9n?= , Patrick Colp , Jan Kiszka , Stefan Hajnoczi , Weijiang Yang , Zhang@vger.kernel.org, Yu C , =?UTF-8?q?Mihai=20Don=C8=9Bu?= , =?UTF-8?q?Adalbert=20Laz=C4=83r?= Subject: [RFC PATCH v6 49/92] kvm: introspection: add KVMI_PAUSE_VCPU and KVMI_EVENT_PAUSE_VCPU Date: Fri, 9 Aug 2019 
19:00:04 +0300 Message-Id: <20190809160047.8319-50-alazar@bitdefender.com> In-Reply-To: <20190809160047.8319-1-alazar@bitdefender.com> References: <20190809160047.8319-1-alazar@bitdefender.com> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This is the only vCPU command handled by the receiving worker. It increments a pause request counter and kicks the vCPU. This event is sent by the vCPU thread, but has a low priority. It will be sent after any other vCPU introspection event and when no vCPU introspection command is queued. Signed-off-by: Adalbert Lazăr --- Documentation/virtual/kvm/kvmi.rst | 68 ++++++++++++++++++++++++++++++ include/uapi/linux/kvm_para.h | 1 + include/uapi/linux/kvmi.h | 7 +++ virt/kvm/kvmi.c | 65 ++++++++++++++++++++++++++++ virt/kvm/kvmi_int.h | 4 ++ virt/kvm/kvmi_msg.c | 61 +++++++++++++++++++++++++++ 6 files changed, 206 insertions(+) diff --git a/Documentation/virtual/kvm/kvmi.rst b/Documentation/virtual/kvm/kvmi.rst index eef32107837a..558d3eb6007f 100644 --- a/Documentation/virtual/kvm/kvmi.rst +++ b/Documentation/virtual/kvm/kvmi.rst @@ -820,6 +820,48 @@ one page (offset + size <= PAGE_SIZE). * -KVM_EINVAL - the specified gpa is invalid +16. KVMI_PAUSE_VCPU +------------------- + +:Architectures: all +:Versions: >= 1 +:Parameters: + + struct kvmi_vcpu_hdr; + struct kvmi_pause_vcpu { + __u8 wait; + __u8 padding1; + __u16 padding2; + __u32 padding3; + }; + +:Returns: + +:: + + struct kvmi_error_code; + +Kicks the vCPU out of the guest. + +If ``wait`` is 1, the command will wait for the vCPU to acknowledge the IPI. + +The vCPU will handle the pending commands/events and send the +*KVMI_EVENT_PAUSE_VCPU* event (one for every successful *KVMI_PAUSE_VCPU* +command) before returning to guest. + +Please note that new vCPUs might be created at any time. 
+The introspection tool should use *KVMI_CONTROL_VM_EVENTS* to enable the +*KVMI_EVENT_CREATE_VCPU* event in order to stop these new vCPUs as well +(by delaying the event reply). + +:Errors: + +* -KVM_EINVAL - the selected vCPU is invalid +* -KVM_EINVAL - padding is not zero +* -KVM_EAGAIN - the selected vCPU can't be introspected yet +* -KVM_EBUSY - the selected vCPU has too many queued *KVMI_EVENT_PAUSE_VCPU* events +* -KVM_EPERM - the *KVMI_EVENT_PAUSE_VCPU* event is disallowed (see *KVMI_CONTROL_EVENTS*) + and the introspection tool expects a reply. Events ====== @@ -992,3 +1034,29 @@ The *RETRY* action is used by the introspector to retry the execution of the current instruction. Either using single-step (if ``singlestep`` is not zero) or return to guest (if the introspector changed the instruction pointer or the page restrictions). + +4. KVMI_EVENT_PAUSE_VCPU +------------------------ + +:Architectures: all +:Versions: >= 1 +:Actions: CONTINUE, CRASH +:Parameters: + +:: + + struct kvmi_event; + +:Returns: + +:: + + struct kvmi_vcpu_hdr; + struct kvmi_event_reply; + +This event is sent in response to a *KVMI_PAUSE_VCPU* command and +cannot be disabled via *KVMI_CONTROL_EVENTS*. + +This event has a low priority. It will be sent after any other vCPU +introspection event and when no vCPU introspection command is queued. 
+ diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h index 54c0e20f5b64..07e3f2662b36 100644 --- a/include/uapi/linux/kvm_para.h +++ b/include/uapi/linux/kvm_para.h @@ -18,6 +18,7 @@ #define KVM_EPERM EPERM #define KVM_EOPNOTSUPP 95 #define KVM_EAGAIN 11 +#define KVM_EBUSY EBUSY #define KVM_ENOMEM ENOMEM #define KVM_HC_VAPIC_POLL_IRQ 1 diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h index be3f066f314e..ca9c6b6aeed5 100644 --- a/include/uapi/linux/kvmi.h +++ b/include/uapi/linux/kvmi.h @@ -177,6 +177,13 @@ struct kvmi_get_vcpu_info_reply { __u64 tsc_speed; }; +struct kvmi_pause_vcpu { + __u8 wait; + __u8 padding1; + __u16 padding2; + __u32 padding3; +}; + struct kvmi_control_events { __u16 event_id; __u8 enable; diff --git a/virt/kvm/kvmi.c b/virt/kvm/kvmi.c index a84eb150e116..85de2da3eb7b 100644 --- a/virt/kvm/kvmi.c +++ b/virt/kvm/kvmi.c @@ -11,6 +11,8 @@ #include <linux/kthread.h> #include <linux/bitmap.h> +#define MAX_PAUSE_REQUESTS 1001 + static struct kmem_cache *msg_cache; static struct kmem_cache *radix_cache; static struct kmem_cache *job_cache; @@ -1090,6 +1092,39 @@ static bool kvmi_create_vcpu_event(struct kvm_vcpu *vcpu) return ret; } +static bool __kvmi_pause_vcpu_event(struct kvm_vcpu *vcpu) +{ + u32 action; + bool ret = false; + + action = kvmi_msg_send_pause_vcpu(vcpu); + switch (action) { + case KVMI_EVENT_ACTION_CONTINUE: + ret = true; + break; + default: + kvmi_handle_common_event_actions(vcpu, action, "PAUSE"); + } + + return ret; +} + +static bool kvmi_pause_vcpu_event(struct kvm_vcpu *vcpu) +{ + struct kvmi *ikvm; + bool ret = true; + + ikvm = kvmi_get(vcpu->kvm); + if (!ikvm) + return true; + + ret = __kvmi_pause_vcpu_event(vcpu); + + kvmi_put(vcpu->kvm); + + return ret; +} + void kvmi_run_jobs(struct kvm_vcpu *vcpu) { struct kvmi_vcpu *ivcpu = IVCPU(vcpu); @@ -1154,6 +1189,7 @@ int kvmi_run_jobs_and_wait(struct kvm_vcpu *vcpu) void kvmi_handle_requests(struct kvm_vcpu *vcpu) { + struct kvmi_vcpu *ivcpu = IVCPU(vcpu); struct kvmi 
*ikvm; ikvm = kvmi_get(vcpu->kvm); @@ -1165,6 +1201,12 @@ void kvmi_handle_requests(struct kvm_vcpu *vcpu) if (err) break; + + if (!atomic_read(&ivcpu->pause_requests)) + break; + + atomic_dec(&ivcpu->pause_requests); + kvmi_pause_vcpu_event(vcpu); } kvmi_put(vcpu->kvm); @@ -1351,10 +1393,33 @@ int kvmi_cmd_control_vm_events(struct kvmi *ikvm, unsigned int event_id, return 0; } +int kvmi_cmd_pause_vcpu(struct kvm_vcpu *vcpu, bool wait) +{ + struct kvmi_vcpu *ivcpu = IVCPU(vcpu); + unsigned int req = KVM_REQ_INTROSPECTION; + + if (atomic_read(&ivcpu->pause_requests) > MAX_PAUSE_REQUESTS) + return -KVM_EBUSY; + + atomic_inc(&ivcpu->pause_requests); + kvm_make_request(req, vcpu); + if (wait) + kvm_vcpu_kick_and_wait(vcpu); + else + kvm_vcpu_kick(vcpu); + + return 0; +} + static void kvmi_job_abort(struct kvm_vcpu *vcpu, void *ctx) { struct kvmi_vcpu *ivcpu = IVCPU(vcpu); + /* + * The thread that might increment this atomic is stopped + * and this thread is the only one that could decrement it. 
+ */ + atomic_set(&ivcpu->pause_requests, 0); ivcpu->reply_waiting = false; } diff --git a/virt/kvm/kvmi_int.h b/virt/kvm/kvmi_int.h index 7bdff70d4309..cb3b0ce87bc1 100644 --- a/virt/kvm/kvmi_int.h +++ b/virt/kvm/kvmi_int.h @@ -100,6 +100,8 @@ struct kvmi_vcpu { bool rep_complete; bool effective_rep_complete; + atomic_t pause_requests; + bool reply_waiting; struct kvmi_vcpu_reply reply; @@ -164,6 +166,7 @@ u32 kvmi_msg_send_pf(struct kvm_vcpu *vcpu, u64 gpa, u64 gva, u8 access, bool *singlestep, bool *rep_complete, u64 *ctx_addr, u8 *ctx, u32 *ctx_size); u32 kvmi_msg_send_create_vcpu(struct kvm_vcpu *vcpu); +u32 kvmi_msg_send_pause_vcpu(struct kvm_vcpu *vcpu); int kvmi_msg_send_unhook(struct kvmi *ikvm); /* kvmi.c */ @@ -185,6 +188,7 @@ int kvmi_cmd_control_events(struct kvm_vcpu *vcpu, unsigned int event_id, bool enable); int kvmi_cmd_control_vm_events(struct kvmi *ikvm, unsigned int event_id, bool enable); +int kvmi_cmd_pause_vcpu(struct kvm_vcpu *vcpu, bool wait); int kvmi_run_jobs_and_wait(struct kvm_vcpu *vcpu); int kvmi_add_job(struct kvm_vcpu *vcpu, void (*fct)(struct kvm_vcpu *vcpu, void *ctx), diff --git a/virt/kvm/kvmi_msg.c b/virt/kvm/kvmi_msg.c index 9c20a9cfda42..a4446eed354d 100644 --- a/virt/kvm/kvmi_msg.c +++ b/virt/kvm/kvmi_msg.c @@ -34,6 +34,7 @@ static const char *const msg_IDs[] = { [KVMI_GET_PAGE_WRITE_BITMAP] = "KVMI_GET_PAGE_WRITE_BITMAP", [KVMI_GET_VCPU_INFO] = "KVMI_GET_VCPU_INFO", [KVMI_GET_VERSION] = "KVMI_GET_VERSION", + [KVMI_PAUSE_VCPU] = "KVMI_PAUSE_VCPU", [KVMI_READ_PHYSICAL] = "KVMI_READ_PHYSICAL", [KVMI_SET_PAGE_ACCESS] = "KVMI_SET_PAGE_ACCESS", [KVMI_SET_PAGE_WRITE_BITMAP] = "KVMI_SET_PAGE_WRITE_BITMAP", @@ -457,6 +458,53 @@ static bool invalid_vcpu_hdr(const struct kvmi_vcpu_hdr *hdr) return hdr->padding1 || hdr->padding2; } +/* + * We handle this vCPU command on the receiving thread to make it easier + * for userspace to implement a 'pause VM' command. 
Usually, this is done + * by sending one 'pause vCPU' command for every vCPU. By handling the + * command here, userspace can: + * - optimize, by not requesting a reply for the first N-1 vCPUs + * - consider the VM stopped once it receives the reply + * for the last 'pause vCPU' command + */ +static int handle_pause_vcpu(struct kvmi *ikvm, + const struct kvmi_msg_hdr *msg, + const void *_req) +{ + const struct kvmi_pause_vcpu *req = _req; + const struct kvmi_vcpu_hdr *cmd; + struct kvm_vcpu *vcpu = NULL; + int err; + + if (req->padding1 || req->padding2 || req->padding3) { + err = -KVM_EINVAL; + goto reply; + } + + cmd = (const struct kvmi_vcpu_hdr *) (msg + 1); + + if (invalid_vcpu_hdr(cmd)) { + err = -KVM_EINVAL; + goto reply; + } + + if (!is_event_allowed(ikvm, KVMI_EVENT_PAUSE_VCPU)) { + err = -KVM_EPERM; + + if (ikvm->cmd_reply_disabled) + return kvmi_msg_vm_reply(ikvm, msg, err, NULL, 0); + + goto reply; + } + + err = kvmi_get_vcpu(ikvm, cmd->vcpu, &vcpu); + if (!err) + err = kvmi_cmd_pause_vcpu(vcpu, req->wait == 1); + +reply: + return kvmi_msg_vm_maybe_reply(ikvm, msg, err, NULL, 0); +} + /* * These commands are executed on the receiving thread/worker. 
*/ @@ -471,6 +519,7 @@ static int(*const msg_vm[])(struct kvmi *, const struct kvmi_msg_hdr *, [KVMI_GET_PAGE_ACCESS] = handle_get_page_access, [KVMI_GET_PAGE_WRITE_BITMAP] = handle_get_page_write_bitmap, [KVMI_GET_VERSION] = handle_get_version, + [KVMI_PAUSE_VCPU] = handle_pause_vcpu, [KVMI_READ_PHYSICAL] = handle_read_physical, [KVMI_SET_PAGE_ACCESS] = handle_set_page_access, [KVMI_SET_PAGE_WRITE_BITMAP] = handle_set_page_write_bitmap, @@ -966,3 +1015,15 @@ u32 kvmi_msg_send_create_vcpu(struct kvm_vcpu *vcpu) return action; } + +u32 kvmi_msg_send_pause_vcpu(struct kvm_vcpu *vcpu) +{ + int err, action; + + err = kvmi_send_event(vcpu, KVMI_EVENT_PAUSE_VCPU, NULL, 0, + NULL, 0, &action); + if (err) + return KVMI_EVENT_ACTION_CONTINUE; + + return action; +}
dmNwdS0+cGF1c2VfcmVxdWVzdHMpOworCWt2bV9tYWtlX3JlcXVlc3QocmVxLCB2Y3B1KTsKKwlp ZiAod2FpdCkKKwkJa3ZtX3ZjcHVfa2lja19hbmRfd2FpdCh2Y3B1KTsKKwllbHNlCisJCWt2bV92 Y3B1X2tpY2sodmNwdSk7CisKKwlyZXR1cm4gMDsKK30KKwogc3RhdGljIHZvaWQga3ZtaV9qb2Jf YWJvcnQoc3RydWN0IGt2bV92Y3B1ICp2Y3B1LCB2b2lkICpjdHgpCiB7CiAJc3RydWN0IGt2bWlf dmNwdSAqaXZjcHUgPSBJVkNQVSh2Y3B1KTsKIAorCS8qCisJICogVGhlIHRocmVhZCB0aGF0IG1p Z2h0IGluY3JlbWVudCB0aGlzIGF0b21pYyBpcyBzdG9wcGVkCisJICogYW5kIHRoaXMgdGhyZWFk IGlzIHRoZSBvbmx5IG9uZSB0aGF0IGNvdWxkIGRlY3JlbWVudCBpdC4KKwkgKi8KKwlhdG9taWNf c2V0KCZpdmNwdS0+cGF1c2VfcmVxdWVzdHMsIDApOwogCWl2Y3B1LT5yZXBseV93YWl0aW5nID0g ZmFsc2U7CiB9CiAKZGlmZiAtLWdpdCBhL3ZpcnQva3ZtL2t2bWlfaW50LmggYi92aXJ0L2t2bS9r dm1pX2ludC5oCmluZGV4IDdiZGZmNzBkNDMwOS4uY2IzYjBjZTg3YmMxIDEwMDY0NAotLS0gYS92 aXJ0L2t2bS9rdm1pX2ludC5oCisrKyBiL3ZpcnQva3ZtL2t2bWlfaW50LmgKQEAgLTEwMCw2ICsx MDAsOCBAQCBzdHJ1Y3Qga3ZtaV92Y3B1IHsKIAlib29sIHJlcF9jb21wbGV0ZTsKIAlib29sIGVm ZmVjdGl2ZV9yZXBfY29tcGxldGU7CiAKKwlhdG9taWNfdCBwYXVzZV9yZXF1ZXN0czsKKwogCWJv b2wgcmVwbHlfd2FpdGluZzsKIAlzdHJ1Y3Qga3ZtaV92Y3B1X3JlcGx5IHJlcGx5OwogCkBAIC0x NjQsNiArMTY2LDcgQEAgdTMyIGt2bWlfbXNnX3NlbmRfcGYoc3RydWN0IGt2bV92Y3B1ICp2Y3B1 LCB1NjQgZ3BhLCB1NjQgZ3ZhLCB1OCBhY2Nlc3MsCiAJCSAgICAgYm9vbCAqc2luZ2xlc3RlcCwg Ym9vbCAqcmVwX2NvbXBsZXRlLAogCQkgICAgIHU2NCAqY3R4X2FkZHIsIHU4ICpjdHgsIHUzMiAq Y3R4X3NpemUpOwogdTMyIGt2bWlfbXNnX3NlbmRfY3JlYXRlX3ZjcHUoc3RydWN0IGt2bV92Y3B1 ICp2Y3B1KTsKK3UzMiBrdm1pX21zZ19zZW5kX3BhdXNlX3ZjcHUoc3RydWN0IGt2bV92Y3B1ICp2 Y3B1KTsKIGludCBrdm1pX21zZ19zZW5kX3VuaG9vayhzdHJ1Y3Qga3ZtaSAqaWt2bSk7CiAKIC8q IGt2bWkuYyAqLwpAQCAtMTg1LDYgKzE4OCw3IEBAIGludCBrdm1pX2NtZF9jb250cm9sX2V2ZW50 cyhzdHJ1Y3Qga3ZtX3ZjcHUgKnZjcHUsIHVuc2lnbmVkIGludCBldmVudF9pZCwKIAkJCSAgICBi b29sIGVuYWJsZSk7CiBpbnQga3ZtaV9jbWRfY29udHJvbF92bV9ldmVudHMoc3RydWN0IGt2bWkg Kmlrdm0sIHVuc2lnbmVkIGludCBldmVudF9pZCwKIAkJCSAgICAgICBib29sIGVuYWJsZSk7Citp bnQga3ZtaV9jbWRfcGF1c2VfdmNwdShzdHJ1Y3Qga3ZtX3ZjcHUgKnZjcHUsIGJvb2wgd2FpdCk7 
CiBpbnQga3ZtaV9ydW5fam9ic19hbmRfd2FpdChzdHJ1Y3Qga3ZtX3ZjcHUgKnZjcHUpOwogaW50 IGt2bWlfYWRkX2pvYihzdHJ1Y3Qga3ZtX3ZjcHUgKnZjcHUsCiAJCSB2b2lkICgqZmN0KShzdHJ1 Y3Qga3ZtX3ZjcHUgKnZjcHUsIHZvaWQgKmN0eCksCmRpZmYgLS1naXQgYS92aXJ0L2t2bS9rdm1p X21zZy5jIGIvdmlydC9rdm0va3ZtaV9tc2cuYwppbmRleCA5YzIwYTljZmRhNDIuLmE0NDQ2ZWVk MzU0ZCAxMDA2NDQKLS0tIGEvdmlydC9rdm0va3ZtaV9tc2cuYworKysgYi92aXJ0L2t2bS9rdm1p X21zZy5jCkBAIC0zNCw2ICszNCw3IEBAIHN0YXRpYyBjb25zdCBjaGFyICpjb25zdCBtc2dfSURz W10gPSB7CiAJW0tWTUlfR0VUX1BBR0VfV1JJVEVfQklUTUFQXSA9ICJLVk1JX0dFVF9QQUdFX1dS SVRFX0JJVE1BUCIsCiAJW0tWTUlfR0VUX1ZDUFVfSU5GT10gICAgICAgICA9ICJLVk1JX0dFVF9W Q1BVX0lORk8iLAogCVtLVk1JX0dFVF9WRVJTSU9OXSAgICAgICAgICAgPSAiS1ZNSV9HRVRfVkVS U0lPTiIsCisJW0tWTUlfUEFVU0VfVkNQVV0gICAgICAgICAgICA9ICJLVk1JX1BBVVNFX1ZDUFUi LAogCVtLVk1JX1JFQURfUEhZU0lDQUxdICAgICAgICAgPSAiS1ZNSV9SRUFEX1BIWVNJQ0FMIiwK IAlbS1ZNSV9TRVRfUEFHRV9BQ0NFU1NdICAgICAgID0gIktWTUlfU0VUX1BBR0VfQUNDRVNTIiwK IAlbS1ZNSV9TRVRfUEFHRV9XUklURV9CSVRNQVBdID0gIktWTUlfU0VUX1BBR0VfV1JJVEVfQklU TUFQIiwKQEAgLTQ1Nyw2ICs0NTgsNTMgQEAgc3RhdGljIGJvb2wgaW52YWxpZF92Y3B1X2hkcihj b25zdCBzdHJ1Y3Qga3ZtaV92Y3B1X2hkciAqaGRyKQogCXJldHVybiBoZHItPnBhZGRpbmcxIHx8 IGhkci0+cGFkZGluZzI7CiB9CiAKKy8qCisgKiBXZSBoYW5kbGUgdGhpcyB2Q1BVIGNvbW1hbmQg b24gdGhlIHJlY2VpdmluZyB0aHJlYWQgdG8gbWFrZSBpdCBlYXNpZXIKKyAqIGZvciB1c2Vyc3Bh Y2UgdG8gaW1wbGVtZW50IGEgJ3BhdXNlIFZNJyBjb21tYW5kLiBVc3VhbGx5LCB0aGlzIGlzIGRv bmUKKyAqIGJ5IHNlbmRpbmcgb25lICdwYXVzZSB2Q1BVJyBjb21tYW5kIGZvciBldmVyeSB2Q1BV LiBCeSBoYW5kbGluZyB0aGUKKyAqIGNvbW1hbmQgaGVyZSwgdGhlIHVzZXJzcGFjZSBjYW46Cisg KiAgICAtIG9wdGltaXplLCBieSBub3QgcmVxdWVzdGluZyBhIHJlcGx5IGZvciB0aGUgZmlyc3Qg Ti0xIHZDUFUncworICogICAgLSBjb25zaWRlciB0aGUgVk0gc3RvcHBlZCBvbmNlIGl0IHJlY2Vp dmVzIHRoZSByZXBseQorICogICAgICBmb3IgdGhlIGxhc3QgJ3BhdXNlIHZDUFUnIGNvbW1hbmQK KyAqLworc3RhdGljIGludCBoYW5kbGVfcGF1c2VfdmNwdShzdHJ1Y3Qga3ZtaSAqaWt2bSwKKwkJ CSAgICAgY29uc3Qgc3RydWN0IGt2bWlfbXNnX2hkciAqbXNnLAorCQkJICAgICBjb25zdCB2b2lk 
ICpfcmVxKQoreworCWNvbnN0IHN0cnVjdCBrdm1pX3BhdXNlX3ZjcHUgKnJlcSA9IF9yZXE7CisJ Y29uc3Qgc3RydWN0IGt2bWlfdmNwdV9oZHIgKmNtZDsKKwlzdHJ1Y3Qga3ZtX3ZjcHUgKnZjcHUg PSBOVUxMOworCWludCBlcnI7CisKKwlpZiAocmVxLT5wYWRkaW5nMSB8fCByZXEtPnBhZGRpbmcy IHx8IHJlcS0+cGFkZGluZzMpIHsKKwkJZXJyID0gLUtWTV9FSU5WQUw7CisJCWdvdG8gcmVwbHk7 CisJfQorCisJY21kID0gKGNvbnN0IHN0cnVjdCBrdm1pX3ZjcHVfaGRyICopIChtc2cgKyAxKTsK KworCWlmIChpbnZhbGlkX3ZjcHVfaGRyKGNtZCkpIHsKKwkJZXJyID0gLUtWTV9FSU5WQUw7CisJ CWdvdG8gcmVwbHk7CisJfQorCisJaWYgKCFpc19ldmVudF9hbGxvd2VkKGlrdm0sIEtWTUlfRVZF TlRfUEFVU0VfVkNQVSkpIHsKKwkJZXJyID0gLUtWTV9FUEVSTTsKKworCQlpZiAoaWt2bS0+Y21k X3JlcGx5X2Rpc2FibGVkKQorCQkJcmV0dXJuIGt2bWlfbXNnX3ZtX3JlcGx5KGlrdm0sIG1zZywg ZXJyLCBOVUxMLCAwKTsKKworCQlnb3RvIHJlcGx5OworCX0KKworCWVyciA9IGt2bWlfZ2V0X3Zj cHUoaWt2bSwgY21kLT52Y3B1LCAmdmNwdSk7CisJaWYgKCFlcnIpCisJCWVyciA9IGt2bWlfY21k X3BhdXNlX3ZjcHUodmNwdSwgcmVxLT53YWl0ID09IDEpOworCityZXBseToKKwlyZXR1cm4ga3Zt aV9tc2dfdm1fbWF5YmVfcmVwbHkoaWt2bSwgbXNnLCBlcnIsIE5VTEwsIDApOworfQorCiAvKgog ICogVGhlc2UgY29tbWFuZHMgYXJlIGV4ZWN1dGVkIG9uIHRoZSByZWNlaXZpbmcgdGhyZWFkL3dv cmtlci4KICAqLwpAQCAtNDcxLDYgKzUxOSw3IEBAIHN0YXRpYyBpbnQoKmNvbnN0IG1zZ192bVtd KShzdHJ1Y3Qga3ZtaSAqLCBjb25zdCBzdHJ1Y3Qga3ZtaV9tc2dfaGRyICosCiAJW0tWTUlfR0VU X1BBR0VfQUNDRVNTXSAgICAgICA9IGhhbmRsZV9nZXRfcGFnZV9hY2Nlc3MsCiAJW0tWTUlfR0VU X1BBR0VfV1JJVEVfQklUTUFQXSA9IGhhbmRsZV9nZXRfcGFnZV93cml0ZV9iaXRtYXAsCiAJW0tW TUlfR0VUX1ZFUlNJT05dICAgICAgICAgICA9IGhhbmRsZV9nZXRfdmVyc2lvbiwKKwlbS1ZNSV9Q QVVTRV9WQ1BVXSAgICAgICAgICAgID0gaGFuZGxlX3BhdXNlX3ZjcHUsCiAJW0tWTUlfUkVBRF9Q SFlTSUNBTF0gICAgICAgICA9IGhhbmRsZV9yZWFkX3BoeXNpY2FsLAogCVtLVk1JX1NFVF9QQUdF X0FDQ0VTU10gICAgICAgPSBoYW5kbGVfc2V0X3BhZ2VfYWNjZXNzLAogCVtLVk1JX1NFVF9QQUdF X1dSSVRFX0JJVE1BUF0gPSBoYW5kbGVfc2V0X3BhZ2Vfd3JpdGVfYml0bWFwLApAQCAtOTY2LDMg KzEwMTUsMTUgQEAgdTMyIGt2bWlfbXNnX3NlbmRfY3JlYXRlX3ZjcHUoc3RydWN0IGt2bV92Y3B1 ICp2Y3B1KQogCiAJcmV0dXJuIGFjdGlvbjsKIH0KKwordTMyIGt2bWlfbXNnX3NlbmRfcGF1c2Vf 
dmNwdShzdHJ1Y3Qga3ZtX3ZjcHUgKnZjcHUpCit7CisJaW50IGVyciwgYWN0aW9uOworCisJZXJy ID0ga3ZtaV9zZW5kX2V2ZW50KHZjcHUsIEtWTUlfRVZFTlRfUEFVU0VfVkNQVSwgTlVMTCwgMCwK KwkJCSAgICAgIE5VTEwsIDAsICZhY3Rpb24pOworCWlmIChlcnIpCisJCXJldHVybiBLVk1JX0VW RU5UX0FDVElPTl9DT05USU5VRTsKKworCXJldHVybiBhY3Rpb247Cit9Cl9fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClZpcnR1YWxpemF0aW9uIG1haWxpbmcg bGlzdApWaXJ0dWFsaXphdGlvbkBsaXN0cy5saW51eC1mb3VuZGF0aW9uLm9yZwpodHRwczovL2xp c3RzLmxpbnV4Zm91bmRhdGlvbi5vcmcvbWFpbG1hbi9saXN0aW5mby92aXJ0dWFsaXphdGlvbg==