From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7EA8AC31E40 for ; Fri, 9 Aug 2019 16:16:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1BD562086A for ; Fri, 9 Aug 2019 16:16:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2437135AbfHIQPE (ORCPT ); Fri, 9 Aug 2019 12:15:04 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:52914 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2437015AbfHIQPC (ORCPT ); Fri, 9 Aug 2019 12:15:02 -0400 Received: from smtp.bitdefender.com (smtp02.buh.bitdefender.net [10.17.80.76]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 545FA305D35F; Fri, 9 Aug 2019 19:01:39 +0300 (EEST) Received: from localhost.localdomain (unknown [89.136.169.210]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 026BE305B7A0; Fri, 9 Aug 2019 19:01:38 +0300 (EEST) From: =?UTF-8?q?Adalbert=20Laz=C4=83r?= To: kvm@vger.kernel.org Cc: linux-mm@kvack.org, virtualization@lists.linux-foundation.org, Paolo Bonzini , =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= , Konrad Rzeszutek Wilk , Tamas K Lengyel , Mathieu Tarral , =?UTF-8?q?Samuel=20Laur=C3=A9n?= , Patrick Colp , Jan Kiszka , Stefan Hajnoczi , Weijiang Yang , Zhang@vger.kernel.org, Yu C , =?UTF-8?q?Mihai=20Don=C8=9Bu?= , =?UTF-8?q?Adalbert=20Laz=C4=83r?= , =?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= , =?UTF-8?q?Mircea=20C=C3=AErjaliu?= , Marian Rotariu Subject: [RFC PATCH v6 77/92] kvm: introspection: add trace functions Date: Fri, 9 Aug 2019 19:00:32 +0300 Message-Id: <20190809160047.8319-78-alazar@bitdefender.com> In-Reply-To: <20190809160047.8319-1-alazar@bitdefender.com> References: <20190809160047.8319-1-alazar@bitdefender.com> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Co-developed-by: Nicușor Cîțu Signed-off-by: Nicușor Cîțu Co-developed-by: Mircea Cîrjaliu Signed-off-by: Mircea Cîrjaliu Co-developed-by: Marian Rotariu Signed-off-by: Marian Rotariu Co-developed-by: Adalbert Lazăr Signed-off-by: Adalbert Lazăr --- arch/x86/kvm/kvmi.c | 63 ++++ include/trace/events/kvmi.h | 680 ++++++++++++++++++++++++++++++++++++ virt/kvm/kvmi.c | 20 ++ virt/kvm/kvmi_mem.c | 5 + virt/kvm/kvmi_msg.c | 16 + 5 files changed, 784 insertions(+) create mode 100644 include/trace/events/kvmi.h diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 5312f179af9c..171e76449271 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -9,6 +9,8 @@ #include #include "../../../virt/kvm/kvmi_int.h" +#include + static unsigned long *msr_mask(struct kvm_vcpu *vcpu, unsigned int *msr) { switch (*msr) { @@ -102,6 +104,9 @@ static bool __kvmi_msr_event(struct kvm_vcpu *vcpu, struct msr_data *msr) if (old_msr.data == msr->data) return true; + trace_kvmi_event_msr_send(vcpu->vcpu_id, msr->index, old_msr.data, + msr->data); + action = kvmi_send_msr(vcpu, msr->index, old_msr.data, 
msr->data, &ret_value); switch (action) { @@ -113,6 +118,8 @@ static bool __kvmi_msr_event(struct kvm_vcpu *vcpu, struct msr_data *msr) kvmi_handle_common_event_actions(vcpu, action, "MSR"); } + trace_kvmi_event_msr_recv(vcpu->vcpu_id, action, ret_value); + return ret; } @@ -387,6 +394,8 @@ static bool __kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr, if (!test_bit(cr, IVCPU(vcpu)->cr_mask)) return true; + trace_kvmi_event_cr_send(vcpu->vcpu_id, cr, old_value, *new_value); + action = kvmi_send_cr(vcpu, cr, old_value, *new_value, &ret_value); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -397,6 +406,8 @@ static bool __kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr, kvmi_handle_common_event_actions(vcpu, action, "CR"); } + trace_kvmi_event_cr_recv(vcpu->vcpu_id, action, ret_value); + return ret; } @@ -437,6 +448,8 @@ static void __kvmi_xsetbv_event(struct kvm_vcpu *vcpu) { u32 action; + trace_kvmi_event_xsetbv_send(vcpu->vcpu_id); + action = kvmi_send_xsetbv(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -444,6 +457,8 @@ static void __kvmi_xsetbv_event(struct kvm_vcpu *vcpu) default: kvmi_handle_common_event_actions(vcpu, action, "XSETBV"); } + + trace_kvmi_event_xsetbv_recv(vcpu->vcpu_id, action); } void kvmi_xsetbv_event(struct kvm_vcpu *vcpu) @@ -460,12 +475,26 @@ void kvmi_xsetbv_event(struct kvm_vcpu *vcpu) kvmi_put(vcpu->kvm); } +static u64 get_next_rip(struct kvm_vcpu *vcpu) +{ + struct kvmi_vcpu *ivcpu = IVCPU(vcpu); + + if (ivcpu->have_delayed_regs) + return ivcpu->delayed_regs.rip; + else + return kvm_rip_read(vcpu); +} + void kvmi_arch_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 insn_len) { u32 action; u64 gpa; + u64 old_rip; gpa = kvm_mmu_gva_to_gpa_system(vcpu, gva, 0, NULL); + old_rip = kvm_rip_read(vcpu); + + trace_kvmi_event_bp_send(vcpu->vcpu_id, gpa, old_rip); action = kvmi_msg_send_bp(vcpu, gpa, insn_len); switch (action) { @@ -478,6 +507,8 @@ void kvmi_arch_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 insn_len) default: kvmi_handle_common_event_actions(vcpu, action, "BP"); } + + trace_kvmi_event_bp_recv(vcpu->vcpu_id, action, get_next_rip(vcpu)); } #define KVM_HC_XEN_HVM_OP_GUEST_REQUEST_VM_EVENT 24 @@ -504,6 +535,8 @@ void kvmi_arch_hypercall_event(struct kvm_vcpu *vcpu) { u32 action; + trace_kvmi_event_hc_send(vcpu->vcpu_id); + action = kvmi_msg_send_hypercall(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -511,6 +544,8 @@ void kvmi_arch_hypercall_event(struct kvm_vcpu *vcpu) default: kvmi_handle_common_event_actions(vcpu, action, "HYPERCALL"); } + + trace_kvmi_event_hc_recv(vcpu->vcpu_id, action); } bool kvmi_arch_pf_event(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva, @@ -532,6 +567,9 @@ bool kvmi_arch_pf_event(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva, if (ivcpu->effective_rep_complete) return true; + trace_kvmi_event_pf_send(vcpu->vcpu_id, gpa, gva, access, + kvm_rip_read(vcpu)); + action = kvmi_msg_send_pf(vcpu, gpa, gva, access, &ivcpu->ss_requested, &ivcpu->rep_complete, &ctx_addr, ivcpu->ctx_data, &ctx_size); @@ -553,6 +591,9 @@ bool kvmi_arch_pf_event(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva, kvmi_handle_common_event_actions(vcpu, action, "PF"); } + trace_kvmi_event_pf_recv(vcpu->vcpu_id, action, get_next_rip(vcpu), + ctx_size, ivcpu->ss_requested, ret); + return ret; } @@ -628,6 +669,11 @@ void kvmi_arch_trap_event(struct kvm_vcpu *vcpu) err = 0; } + trace_kvmi_event_trap_send(vcpu->vcpu_id, vector, + IVCPU(vcpu)->exception.nr, + err, IVCPU(vcpu)->exception.error_code, + vcpu->arch.cr2); + 
action = kvmi_send_trap(vcpu, vector, type, err, vcpu->arch.cr2); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -635,6 +681,8 @@ void kvmi_arch_trap_event(struct kvm_vcpu *vcpu) default: kvmi_handle_common_event_actions(vcpu, action, "TRAP"); } + + trace_kvmi_event_trap_recv(vcpu->vcpu_id, action); } static bool __kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, @@ -643,6 +691,8 @@ static bool __kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, u32 action; bool ret = false; + trace_kvmi_event_desc_send(vcpu->vcpu_id, descriptor, write); + action = kvmi_msg_send_descriptor(vcpu, descriptor, write); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -654,6 +704,8 @@ static bool __kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, kvmi_handle_common_event_actions(vcpu, action, "DESC"); } + trace_kvmi_event_desc_recv(vcpu->vcpu_id, action); + return ret; } @@ -718,6 +770,15 @@ int kvmi_arch_cmd_inject_exception(struct kvm_vcpu *vcpu, u8 vector, bool error_code_valid, u32 error_code, u64 address) { + struct x86_exception e = { + .error_code_valid = error_code_valid, + .error_code = error_code, + .address = address, + .vector = vector, + }; + + trace_kvmi_cmd_inject_exception(vcpu, &e); + if (!(is_vector_valid(vector) && is_gva_valid(vcpu, address))) return -KVM_EINVAL; @@ -876,6 +937,8 @@ void kvmi_arch_update_page_tracking(struct kvm *kvm, return; } + trace_kvmi_set_gfn_access(m->gfn, m->access, m->write_bitmap, slot->id); + for (i = 0; i < ARRAY_SIZE(track_modes); i++) { unsigned int allow_bit = track_modes[i].allow_bit; enum kvm_page_track_mode mode = track_modes[i].track_mode; diff --git a/include/trace/events/kvmi.h b/include/trace/events/kvmi.h new file mode 100644 index 000000000000..442189437fe7 --- /dev/null +++ b/include/trace/events/kvmi.h @@ -0,0 +1,680 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM kvmi + +#if !defined(_TRACE_KVMI_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_KVMI_H + +#include + +#ifndef __TRACE_KVMI_STRUCTURES +#define __TRACE_KVMI_STRUCTURES + +#undef EN +#define EN(x) { x, #x } + +static const struct trace_print_flags kvmi_msg_id_symbol[] = { + EN(KVMI_GET_VERSION), + EN(KVMI_CHECK_COMMAND), + EN(KVMI_CHECK_EVENT), + EN(KVMI_GET_GUEST_INFO), + EN(KVMI_GET_VCPU_INFO), + EN(KVMI_GET_REGISTERS), + EN(KVMI_SET_REGISTERS), + EN(KVMI_GET_PAGE_ACCESS), + EN(KVMI_SET_PAGE_ACCESS), + EN(KVMI_GET_PAGE_WRITE_BITMAP), + EN(KVMI_SET_PAGE_WRITE_BITMAP), + EN(KVMI_INJECT_EXCEPTION), + EN(KVMI_READ_PHYSICAL), + EN(KVMI_WRITE_PHYSICAL), + EN(KVMI_GET_MAP_TOKEN), + EN(KVMI_CONTROL_EVENTS), + EN(KVMI_CONTROL_CR), + EN(KVMI_CONTROL_MSR), + EN(KVMI_EVENT), + EN(KVMI_EVENT_REPLY), + EN(KVMI_GET_CPUID), + EN(KVMI_GET_XSAVE), + EN(KVMI_PAUSE_VCPU), + EN(KVMI_CONTROL_VM_EVENTS), + EN(KVMI_GET_MTRR_TYPE), + EN(KVMI_CONTROL_SPP), + EN(KVMI_CONTROL_CMD_RESPONSE), + {-1, NULL} +}; + +static const struct trace_print_flags kvmi_descriptor_symbol[] = { + EN(KVMI_DESC_IDTR), + EN(KVMI_DESC_GDTR), + EN(KVMI_DESC_LDTR), + EN(KVMI_DESC_TR), + {-1, NULL} +}; + +static const struct trace_print_flags kvmi_event_symbol[] = { + EN(KVMI_EVENT_UNHOOK), + EN(KVMI_EVENT_CR), + EN(KVMI_EVENT_MSR), + EN(KVMI_EVENT_XSETBV), + EN(KVMI_EVENT_BREAKPOINT), + EN(KVMI_EVENT_HYPERCALL), + EN(KVMI_EVENT_PF), + EN(KVMI_EVENT_TRAP), + EN(KVMI_EVENT_DESCRIPTOR), + EN(KVMI_EVENT_CREATE_VCPU), + EN(KVMI_EVENT_PAUSE_VCPU), + EN(KVMI_EVENT_SINGLESTEP), + { -1, NULL } +}; + +static const struct trace_print_flags kvmi_action_symbol[] = { 
+ {KVMI_EVENT_ACTION_CONTINUE, "continue"}, + {KVMI_EVENT_ACTION_RETRY, "retry"}, + {KVMI_EVENT_ACTION_CRASH, "crash"}, + {-1, NULL} +}; + +#endif /* __TRACE_KVMI_STRUCTURES */ + +TRACE_EVENT( + kvmi_vm_command, + TP_PROTO(__u16 id, __u32 seq), + TP_ARGS(id, seq), + TP_STRUCT__entry( + __field(__u16, id) + __field(__u32, seq) + ), + TP_fast_assign( + __entry->id = id; + __entry->seq = seq; + ), + TP_printk("%s seq %d", + trace_print_symbols_seq(p, __entry->id, kvmi_msg_id_symbol), + __entry->seq) +); + +TRACE_EVENT( + kvmi_vm_reply, + TP_PROTO(__u16 id, __u32 seq, __s32 err), + TP_ARGS(id, seq, err), + TP_STRUCT__entry( + __field(__u16, id) + __field(__u32, seq) + __field(__s32, err) + ), + TP_fast_assign( + __entry->id = id; + __entry->seq = seq; + __entry->err = err; + ), + TP_printk("%s seq %d err %d", + trace_print_symbols_seq(p, __entry->id, kvmi_msg_id_symbol), + __entry->seq, + __entry->err) +); + +TRACE_EVENT( + kvmi_vcpu_command, + TP_PROTO(__u16 vcpu, __u16 id, __u32 seq), + TP_ARGS(vcpu, id, seq), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u16, id) + __field(__u32, seq) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->id = id; + __entry->seq = seq; + ), + TP_printk("vcpu %d %s seq %d", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->id, kvmi_msg_id_symbol), + __entry->seq) +); + +TRACE_EVENT( + kvmi_vcpu_reply, + TP_PROTO(__u16 vcpu, __u16 id, __u32 seq, __s32 err), + TP_ARGS(vcpu, id, seq, err), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u16, id) + __field(__u32, seq) + __field(__s32, err) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->id = id; + __entry->seq = seq; + __entry->err = err; + ), + TP_printk("vcpu %d %s seq %d err %d", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->id, kvmi_msg_id_symbol), + __entry->seq, + __entry->err) +); + +TRACE_EVENT( + kvmi_event, + TP_PROTO(__u16 vcpu, __u32 id, __u32 seq), + TP_ARGS(vcpu, id, seq), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, id) + __field(__u32, seq) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->id = id; + __entry->seq = seq; + ), + TP_printk("vcpu %d %s seq %d", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->id, kvmi_event_symbol), + __entry->seq) +); + +TRACE_EVENT( + kvmi_event_reply, + TP_PROTO(__u32 id, __u32 seq), + TP_ARGS(id, seq), + TP_STRUCT__entry( + __field(__u32, id) + __field(__u32, seq) + ), + TP_fast_assign( + __entry->id = id; + __entry->seq = seq; + ), + TP_printk("%s seq %d", + trace_print_symbols_seq(p, __entry->id, kvmi_event_symbol), + __entry->seq) +); + +#define KVMI_ACCESS_PRINTK() ({ \ + const char *saved_ptr = trace_seq_buffer_ptr(p); \ + static const char * const access_str[] = { \ + "---", "r--", "-w-", "rw-", "--x", "r-x", "-wx", "rwx" \ + }; \ + trace_seq_printf(p, "%s", access_str[__entry->access & 7]); \ + saved_ptr; \ +}) + +TRACE_EVENT( + kvmi_set_gfn_access, + TP_PROTO(__u64 gfn, __u8 access, __u32 bitmap, __u16 slot), + TP_ARGS(gfn, access, bitmap, slot), + TP_STRUCT__entry( + __field(__u64, gfn) + __field(__u8, access) + __field(__u32, bitmap) + __field(__u16, slot) + ), + TP_fast_assign( + __entry->gfn = gfn; + __entry->access = access; + __entry->bitmap = bitmap; + __entry->slot = slot; + ), + TP_printk("gfn %llx %s write bitmap %x slot %d", + __entry->gfn, KVMI_ACCESS_PRINTK(), + __entry->bitmap, __entry->slot) +); + +DECLARE_EVENT_CLASS( + kvmi_event_send_template, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu), + TP_STRUCT__entry( + __field(__u16, vcpu) + ), + TP_fast_assign( + __entry->vcpu = 
vcpu; + ), + TP_printk("vcpu %d", + __entry->vcpu + ) +); +DECLARE_EVENT_CLASS( + kvmi_event_recv_template, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + ), + TP_printk("vcpu %d %s", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol) + ) +); + +TRACE_EVENT( + kvmi_event_cr_send, + TP_PROTO(__u16 vcpu, __u32 cr, __u64 old_value, __u64 new_value), + TP_ARGS(vcpu, cr, old_value, new_value), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, cr) + __field(__u64, old_value) + __field(__u64, new_value) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->cr = cr; + __entry->old_value = old_value; + __entry->new_value = new_value; + ), + TP_printk("vcpu %d cr %x old_value %llx new_value %llx", + __entry->vcpu, + __entry->cr, + __entry->old_value, + __entry->new_value + ) +); +TRACE_EVENT( + kvmi_event_cr_recv, + TP_PROTO(__u16 vcpu, __u32 action, __u64 new_value), + TP_ARGS(vcpu, action, new_value), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + __field(__u64, new_value) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + __entry->new_value = new_value; + ), + TP_printk("vcpu %d %s new_value %llx", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol), + __entry->new_value + ) +); + +TRACE_EVENT( + kvmi_event_msr_send, + TP_PROTO(__u16 vcpu, __u32 msr, __u64 old_value, __u64 new_value), + TP_ARGS(vcpu, msr, old_value, new_value), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, msr) + __field(__u64, old_value) + __field(__u64, new_value) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->msr = msr; + __entry->old_value = old_value; + __entry->new_value = new_value; + ), + TP_printk("vcpu %d msr %x old_value %llx new_value %llx", + __entry->vcpu, + __entry->msr, + __entry->old_value, + __entry->new_value + ) +); +TRACE_EVENT( + kvmi_event_msr_recv, + TP_PROTO(__u16 vcpu, __u32 action, __u64 new_value), + TP_ARGS(vcpu, action, new_value), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + __field(__u64, new_value) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + __entry->new_value = new_value; + ), + TP_printk("vcpu %d %s new_value %llx", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol), + __entry->new_value + ) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_xsetbv_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_xsetbv_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +TRACE_EVENT( + kvmi_event_bp_send, + TP_PROTO(__u16 vcpu, __u64 gpa, __u64 old_rip), + TP_ARGS(vcpu, gpa, old_rip), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u64, gpa) + __field(__u64, old_rip) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->gpa = gpa; + __entry->old_rip = old_rip; + ), + TP_printk("vcpu %d gpa %llx rip %llx", + __entry->vcpu, + __entry->gpa, + __entry->old_rip + ) +); +TRACE_EVENT( + kvmi_event_bp_recv, + TP_PROTO(__u16 vcpu, __u32 action, __u64 new_rip), + TP_ARGS(vcpu, action, new_rip), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + __field(__u64, new_rip) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + __entry->new_rip = new_rip; + ), + TP_printk("vcpu %d %s rip %llx", + 
__entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol), + __entry->new_rip + ) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_hc_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_hc_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +TRACE_EVENT( + kvmi_event_pf_send, + TP_PROTO(__u16 vcpu, __u64 gpa, __u64 gva, __u8 access, __u64 rip), + TP_ARGS(vcpu, gpa, gva, access, rip), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u64, gpa) + __field(__u64, gva) + __field(__u8, access) + __field(__u64, rip) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->gpa = gpa; + __entry->gva = gva; + __entry->access = access; + __entry->rip = rip; + ), + TP_printk("vcpu %d gpa %llx %s gva %llx rip %llx", + __entry->vcpu, + __entry->gpa, + KVMI_ACCESS_PRINTK(), + __entry->gva, + __entry->rip + ) +); +TRACE_EVENT( + kvmi_event_pf_recv, + TP_PROTO(__u16 vcpu, __u32 action, __u64 next_rip, size_t custom_data, + bool singlestep, bool ret), + TP_ARGS(vcpu, action, next_rip, custom_data, singlestep, ret), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + __field(__u64, next_rip) + __field(size_t, custom_data) + __field(bool, singlestep) + __field(bool, ret) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + __entry->next_rip = next_rip; + __entry->custom_data = custom_data; + __entry->singlestep = singlestep; + __entry->ret = ret; + ), + TP_printk("vcpu %d %s rip %llx custom %zu %s", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol), + __entry->next_rip, __entry->custom_data, + (__entry->singlestep ? (__entry->ret ? "singlestep failed" : + "singlestep running") + : "") + ) +); + +TRACE_EVENT( + kvmi_event_trap_send, + TP_PROTO(__u16 vcpu, __u32 vector, __u8 nr, __u32 err, __u16 error_code, + __u64 cr2), + TP_ARGS(vcpu, vector, nr, err, error_code, cr2), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, vector) + __field(__u8, nr) + __field(__u32, err) + __field(__u16, error_code) + __field(__u64, cr2) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->vector = vector; + __entry->nr = nr; + __entry->err = err; + __entry->error_code = error_code; + __entry->cr2 = cr2; + ), + TP_printk("vcpu %d vector %x/%x err %x/%x address %llx", + __entry->vcpu, + __entry->vector, __entry->nr, + __entry->err, __entry->error_code, + __entry->cr2 + ) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_trap_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +TRACE_EVENT( + kvmi_event_desc_send, + TP_PROTO(__u16 vcpu, __u8 descriptor, __u8 write), + TP_ARGS(vcpu, descriptor, write), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u8, descriptor) + __field(__u8, write) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->descriptor = descriptor; + __entry->write = write; + ), + TP_printk("vcpu %d %s %s", + __entry->vcpu, + __entry->write ? 
"write" : "read", + trace_print_symbols_seq(p, __entry->descriptor, + kvmi_descriptor_symbol) + ) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_desc_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_create_vcpu_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_create_vcpu_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_pause_vcpu_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_pause_vcpu_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_singlestep_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_singlestep_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +TRACE_EVENT( + kvmi_run_singlestep, + TP_PROTO(struct kvm_vcpu *vcpu, __u64 gpa, __u8 access, __u8 level, + size_t custom_data), + TP_ARGS(vcpu, gpa, access, level, custom_data), + TP_STRUCT__entry( + __field(__u16, vcpu_id) + __field(__u64, gpa) + __field(__u8, access) + __field(size_t, len) + __array(__u8, insn, 15) + __field(__u8, level) + __field(size_t, custom_data) + ), + TP_fast_assign( + __entry->vcpu_id = vcpu->vcpu_id; + __entry->gpa = gpa; + __entry->access = access; + __entry->len = min_t(size_t, 15, + vcpu->arch.emulate_ctxt.fetch.ptr + - vcpu->arch.emulate_ctxt.fetch.data); + memcpy(__entry->insn, vcpu->arch.emulate_ctxt.fetch.data, 15); + __entry->level = level; + __entry->custom_data = custom_data; + ), + TP_printk("vcpu %d gpa %llx %s insn %s level %x custom %zu", + __entry->vcpu_id, + __entry->gpa, + KVMI_ACCESS_PRINTK(), + __print_hex(__entry->insn, __entry->len), + __entry->level, + __entry->custom_data + ) +); + +TRACE_EVENT( + kvmi_stop_singlestep, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu), + TP_STRUCT__entry( + __field(__u16, vcpu) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + ), + TP_printk("vcpu %d", __entry->vcpu + ) +); + +TRACE_EVENT( + kvmi_mem_map, + TP_PROTO(struct kvm *kvm, gpa_t req_gpa, gpa_t map_gpa), + TP_ARGS(kvm, req_gpa, map_gpa), + TP_STRUCT__entry( + __field_struct(uuid_t, uuid) + __field(gpa_t, req_gpa) + __field(gpa_t, map_gpa) + ), + TP_fast_assign( + struct kvmi *ikvm = kvmi_get(kvm); + + if (ikvm) { + memcpy(&__entry->uuid, &ikvm->uuid, sizeof(uuid_t)); + kvmi_put(kvm); + } else + memset(&__entry->uuid, 0, sizeof(uuid_t)); + __entry->req_gpa = req_gpa; + __entry->map_gpa = map_gpa; + ), + TP_printk("vm %pU req_gpa %llx map_gpa %llx", + &__entry->uuid, + __entry->req_gpa, + __entry->map_gpa + ) +); + +TRACE_EVENT( + kvmi_mem_unmap, + TP_PROTO(gpa_t map_gpa), + TP_ARGS(map_gpa), + TP_STRUCT__entry( + __field(gpa_t, map_gpa) + ), + TP_fast_assign( + __entry->map_gpa = map_gpa; + ), + TP_printk("map_gpa %llx", + __entry->map_gpa + ) +); + +#define EXS(x) { x##_VECTOR, "#" #x } + +#define kvm_trace_sym_exc \ + EXS(DE), EXS(DB), EXS(BP), EXS(OF), EXS(BR), EXS(UD), EXS(NM), \ + EXS(DF), EXS(TS), EXS(NP), EXS(SS), EXS(GP), EXS(PF), \ + EXS(MF), EXS(AC), EXS(MC) + +TRACE_EVENT( + kvmi_cmd_inject_exception, + TP_PROTO(struct kvm_vcpu *vcpu, struct x86_exception *fault), + TP_ARGS(vcpu, fault), + TP_STRUCT__entry( + __field(__u16, vcpu_id) + __field(__u8, vector) + __field(__u64, address) + __field(__u16, error_code) + __field(bool, error_code_valid) + ), + TP_fast_assign( + __entry->vcpu_id = 
vcpu->vcpu_id; + __entry->vector = fault->vector; + __entry->address = fault->address; + __entry->error_code = fault->error_code; + __entry->error_code_valid = fault->error_code_valid; + ), + TP_printk("vcpu %d %s address %llx error %x", + __entry->vcpu_id, + __print_symbolic(__entry->vector, kvm_trace_sym_exc), + __entry->vector == PF_VECTOR ? __entry->address : 0, + __entry->error_code_valid ? __entry->error_code : 0 + ) +); + +#endif /* _TRACE_KVMI_H */ + +#include diff --git a/virt/kvm/kvmi.c b/virt/kvm/kvmi.c index 157f3a401d64..ce28ca8c8d77 100644 --- a/virt/kvm/kvmi.c +++ b/virt/kvm/kvmi.c @@ -12,6 +12,9 @@ #include #include +#define CREATE_TRACE_POINTS +#include + #define MAX_PAUSE_REQUESTS 1001 static struct kmem_cache *msg_cache; @@ -1284,6 +1287,8 @@ static void __kvmi_singlestep_event(struct kvm_vcpu *vcpu) { u32 action; + trace_kvmi_event_singlestep_send(vcpu->vcpu_id); + action = kvmi_send_singlestep(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -1291,6 +1296,8 @@ static void __kvmi_singlestep_event(struct kvm_vcpu *vcpu) default: kvmi_handle_common_event_actions(vcpu, action, "SINGLESTEP"); } + + trace_kvmi_event_singlestep_recv(vcpu->vcpu_id, action); } static void kvmi_singlestep_event(struct kvm_vcpu *vcpu) @@ -1311,6 +1318,8 @@ static bool __kvmi_create_vcpu_event(struct kvm_vcpu *vcpu) u32 action; bool ret = false; + trace_kvmi_event_create_vcpu_send(vcpu->vcpu_id); + action = kvmi_msg_send_create_vcpu(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -1320,6 +1329,8 @@ static bool __kvmi_create_vcpu_event(struct kvm_vcpu *vcpu) kvmi_handle_common_event_actions(vcpu, action, "CREATE"); } + trace_kvmi_event_create_vcpu_recv(vcpu->vcpu_id, action); + return ret; } @@ -1345,6 +1356,8 @@ static bool __kvmi_pause_vcpu_event(struct kvm_vcpu *vcpu) u32 action; bool ret = false; + trace_kvmi_event_pause_vcpu_send(vcpu->vcpu_id); + action = kvmi_msg_send_pause_vcpu(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -1354,6 +1367,8 @@ static bool __kvmi_pause_vcpu_event(struct kvm_vcpu *vcpu) kvmi_handle_common_event_actions(vcpu, action, "PAUSE"); } + trace_kvmi_event_pause_vcpu_recv(vcpu->vcpu_id, action); + return ret; } @@ -1857,6 +1872,8 @@ void kvmi_stop_ss(struct kvm_vcpu *vcpu) ivcpu->ss_owner = false; + trace_kvmi_stop_singlestep(vcpu->vcpu_id); + kvmi_singlestep_event(vcpu); out: @@ -1892,6 +1909,9 @@ static bool kvmi_run_ss(struct kvm_vcpu *vcpu, gpa_t gpa, u8 access) gfn_t gfn = gpa_to_gfn(gpa); int err; + trace_kvmi_run_singlestep(vcpu, gpa, access, ikvm->ss_level, + IVCPU(vcpu)->ctx_size); + kvmi_arch_start_single_step(vcpu); err = write_custom_data(vcpu); diff --git a/virt/kvm/kvmi_mem.c b/virt/kvm/kvmi_mem.c index 6244add60062..a7a01646ea5c 100644 --- a/virt/kvm/kvmi_mem.c +++ b/virt/kvm/kvmi_mem.c @@ -23,6 +23,7 @@ #include #include +#include #include "kvmi_int.h" @@ -221,6 +222,8 @@ int kvmi_host_mem_map(struct kvm_vcpu *vcpu, gva_t tkn_gva, } req_mm = target_kvm->mm; + trace_kvmi_mem_map(target_kvm, req_gpa, map_gpa); + /* translate source addresses */ req_gfn = gpa_to_gfn(req_gpa); req_hva = gfn_to_hva_safe(target_kvm, req_gfn); @@ -274,6 +277,8 @@ int kvmi_host_mem_unmap(struct kvm_vcpu *vcpu, gpa_t map_gpa) kvm_debug("kvmi: unmapping request for map_gpa %016llx\n", map_gpa); + trace_kvmi_mem_unmap(map_gpa); + /* convert GPA -> HVA */ map_gfn = gpa_to_gfn(map_gpa); map_hva = gfn_to_hva_safe(vcpu->kvm, map_gfn); diff --git a/virt/kvm/kvmi_msg.c b/virt/kvm/kvmi_msg.c index a5f87aafa237..bdb1e60906f9 100644 --- 
a/virt/kvm/kvmi_msg.c +++ b/virt/kvm/kvmi_msg.c @@ -8,6 +8,8 @@ #include #include "kvmi_int.h" +#include + typedef int (*vcpu_reply_fct)(struct kvm_vcpu *vcpu, const struct kvmi_msg_hdr *msg, int err, const void *rpl, size_t rpl_size); @@ -165,6 +167,8 @@ static int kvmi_msg_vm_reply(struct kvmi *ikvm, const struct kvmi_msg_hdr *msg, int err, const void *rpl, size_t rpl_size) { + trace_kvmi_vm_reply(msg->id, msg->seq, err); + return kvmi_msg_reply(ikvm, msg, err, rpl, rpl_size); } @@ -202,6 +206,8 @@ int kvmi_msg_vcpu_reply(struct kvm_vcpu *vcpu, const struct kvmi_msg_hdr *msg, int err, const void *rpl, size_t rpl_size) { + trace_kvmi_vcpu_reply(vcpu->vcpu_id, msg->id, msg->seq, err); + return kvmi_msg_reply(IKVM(vcpu->kvm), msg, err, rpl, rpl_size); } @@ -559,6 +565,8 @@ static int handle_event_reply(struct kvm_vcpu *vcpu, struct kvmi_vcpu_reply *expected = &ivcpu->reply; size_t useful, received, common; + trace_kvmi_event_reply(reply->event, msg->seq); + if (unlikely(msg->seq != expected->seq)) goto out; @@ -883,6 +891,8 @@ static struct kvmi_msg_hdr *kvmi_msg_recv(struct kvmi *ikvm, bool *unsupported) static int kvmi_msg_dispatch_vm_cmd(struct kvmi *ikvm, const struct kvmi_msg_hdr *msg) { + trace_kvmi_vm_command(msg->id, msg->seq); + return msg_vm[msg->id](ikvm, msg, msg + 1); } @@ -895,6 +905,8 @@ static int kvmi_msg_dispatch_vcpu_job(struct kvmi *ikvm, struct kvm_vcpu *vcpu = NULL; int err; + trace_kvmi_vcpu_command(cmd->vcpu, hdr->id, hdr->seq); + if (invalid_vcpu_hdr(cmd)) return -KVM_EINVAL; @@ -1051,6 +1063,8 @@ int kvmi_send_event(struct kvm_vcpu *vcpu, u32 ev_id, ivcpu->reply.size = rpl_size; ivcpu->reply.error = -EINTR; + trace_kvmi_event(vcpu->vcpu_id, common.event, hdr.seq); + err = kvmi_sock_write(ikvm, vec, n, msg_size); if (err) goto out; @@ -1091,6 +1105,8 @@ int kvmi_msg_send_unhook(struct kvmi *ikvm) kvmi_setup_event_common(&common, KVMI_EVENT_UNHOOK, 0); + trace_kvmi_event(0, common.event, hdr.seq); + return kvmi_sock_write(ikvm, vec, n, msg_size); } From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B214C433FF for ; Fri, 9 Aug 2019 16:05:25 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B5CD82086A for ; Fri, 9 Aug 2019 16:05:24 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B5CD82086A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=bitdefender.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id AFB3A6B02AC; Fri, 9 Aug 2019 12:01:43 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id ACD736B02AF; Fri, 9 Aug 2019 12:01:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6FE116B02B1; Fri, 9 Aug 2019 12:01:43 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com [209.85.221.70]) by kanga.kvack.org (Postfix) with ESMTP id DC6CF6B02AB for ; Fri, 9 Aug 2019 12:01:42 -0400 (EDT) Received: by 
mail-wr1-f70.google.com with SMTP id t10so1728511wrn.10 for ; Fri, 09 Aug 2019 09:01:42 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-original-authentication-results:x-gm-message-state:from:to:cc :subject:date:message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=JLQZl1tbJIIRjx+fDFzEMmGGP8XQxgt9B311mYu/xHY=; b=CkKjkzuD6x7SK76AfAHQ1Pry4L/ZN1YNukFMFNKxgjERxhJcCBbLfmjn+mYfP16+Er eg1lMvoAiSL1VTfJRm2q4SmPIAQgaXVzZ4FhA8e+SMJL3CDQR7MWY1wSeeQ7KuB4mXd/ zVCi+K6pQReeNCQ0UX8iuyE4/yw7WRUzHV9rJ/xMWc5ZIZEjP2k1lOqAGzjPtsDlvqhn ltxvYaorjEyiVE1RGQT9vIoa4dFNDK97UMvQrxAnUIMIXU0CFpN5ksHAhfSIkvjLmB71 fOoC2DSPXOibfr+SgJaKf2Z1+ritkRfyv/71NBvxn90eYYMko+qGvnxWLusj7jngLMdB CDxw== X-Original-Authentication-Results: mx.google.com; spf=pass (google.com: domain of alazar@bitdefender.com designates 91.199.104.161 as permitted sender) smtp.mailfrom=alazar@bitdefender.com X-Gm-Message-State: APjAAAUSGrnk9prwDHGhmSkS3IjhmC/tjl+O7vy7ZAEiXxYLWihZjXQC w9f6G3m0NQxChbTMFH2bjBTn/NdSPOkTWiV0qIVIGqPGuGWzXUhw/r+NGznEyIqYPAHvjqYf+PJ /Llt4QP5JsN8Pp0mS9i41b8u33rO25e9/AxUc372TskOAbV+ZhOuHPEW6u+KrZwrrsg== X-Received: by 2002:adf:f646:: with SMTP id x6mr26304499wrp.18.1565366502406; Fri, 09 Aug 2019 09:01:42 -0700 (PDT) X-Google-Smtp-Source: APXvYqx2qRlxSkqX6HCyMq1mbba/TEC0Jwt0T7m9xPZYKEXsRThArUp6q4L+ZFb/+u9fWtATa/gU X-Received: by 2002:adf:f646:: with SMTP id x6mr26304243wrp.18.1565366499951; Fri, 09 Aug 2019 09:01:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1565366499; cv=none; d=google.com; s=arc-20160816; b=IbXsP90Mc1ITXAHrX496//asLLF66LZ0z2P8cgLe3hGV/WXqKZDQ/scbYmYo7GheIf vKty1SyUzAFCPof1XDO1DnbXzRplpMzmglMChYb1Gdj8qCBIesBufSkTcbP7y+zlfdlJ sPkww9JxltnAraNo/2Odwwr7c/dHRPfEYtfbrF8Oe/O0pfhy/9zbTvOys1xI4u1WcwDU tEM4RN0x/aQmVhL/Jqq5+sFQQRYdkWMGfBnYegqe0bsCmsEtDCPT8QKYMdzgXgCdkRQt Pc8bSNSnQY0ucDkaVfD23B0/5d+zF9g76U6Q8iDThQ+XXgMHZrX0f+ixVjyC+s5QyzAY sjVQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from; bh=JLQZl1tbJIIRjx+fDFzEMmGGP8XQxgt9B311mYu/xHY=; b=s36LeOZcmivHBVGzF/EUEpMXyQQ2r01qpMI6RC1kt6tz9S+ToRNlC/bK849EwjVEvQ p3Ug/n1qFuI4hEXadwATcfFonpLCapLV8290HChZEHeQ3oWXTujrJkIS5lxJ6rmx+dth T9pfhFoor1YhEtmi743U0hg7I9eSu+LAqXunMzqW3K7+rFx7C9Gm/iM4CrHEj9cZylUg /PDEoVKa8RmGmEPANnj+seyOTwaz7hSflwIWux+hCFIvHMCWznDG0QMe/nsojqQcrYrI cIiOD3amJB0Td9eWEp9tiRBmBh8Yehccb2qxNh2qayjUPavpbRp+GrPs7/xAXogGSTzN DeSA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of alazar@bitdefender.com designates 91.199.104.161 as permitted sender) smtp.mailfrom=alazar@bitdefender.com Received: from mx01.bbu.dsd.mx.bitdefender.com (mx01.bbu.dsd.mx.bitdefender.com. 
[91.199.104.161]) by mx.google.com with ESMTPS id j10si3746596wrn.373.2019.08.09.09.01.39 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 09 Aug 2019 09:01:39 -0700 (PDT) Received-SPF: pass (google.com: domain of alazar@bitdefender.com designates 91.199.104.161 as permitted sender) client-ip=91.199.104.161; Authentication-Results: mx.google.com; spf=pass (google.com: domain of alazar@bitdefender.com designates 91.199.104.161 as permitted sender) smtp.mailfrom=alazar@bitdefender.com Received: from smtp.bitdefender.com (smtp02.buh.bitdefender.net [10.17.80.76]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 545FA305D35F; Fri, 9 Aug 2019 19:01:39 +0300 (EEST) Received: from localhost.localdomain (unknown [89.136.169.210]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 026BE305B7A0; Fri, 9 Aug 2019 19:01:38 +0300 (EEST) From: =?UTF-8?q?Adalbert=20Laz=C4=83r?= To: kvm@vger.kernel.org Cc: linux-mm@kvack.org, virtualization@lists.linux-foundation.org, Paolo Bonzini , =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= , Konrad Rzeszutek Wilk , Tamas K Lengyel , Mathieu Tarral , =?UTF-8?q?Samuel=20Laur=C3=A9n?= , Patrick Colp , Jan Kiszka , Stefan Hajnoczi , Weijiang Yang , Zhang@kvack.org, Yu C , =?UTF-8?q?Mihai=20Don=C8=9Bu?= , =?UTF-8?q?Adalbert=20Laz=C4=83r?= , =?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= , =?UTF-8?q?Mircea=20C=C3=AErjaliu?= , Marian Rotariu Subject: [RFC PATCH v6 77/92] kvm: introspection: add trace functions Date: Fri, 9 Aug 2019 19:00:32 +0300 Message-Id: <20190809160047.8319-78-alazar@bitdefender.com> In-Reply-To: <20190809160047.8319-1-alazar@bitdefender.com> References: <20190809160047.8319-1-alazar@bitdefender.com> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Co-developed-by: Nicușor Cîțu Signed-off-by: Nicușor Cîțu Co-developed-by: Mircea Cîrjaliu Signed-off-by: Mircea Cîrjaliu Co-developed-by: Marian Rotariu Signed-off-by: Marian Rotariu Co-developed-by: Adalbert Lazăr Signed-off-by: Adalbert Lazăr --- arch/x86/kvm/kvmi.c | 63 ++++ include/trace/events/kvmi.h | 680 ++++++++++++++++++++++++++++++++++++ virt/kvm/kvmi.c | 20 ++ virt/kvm/kvmi_mem.c | 5 + virt/kvm/kvmi_msg.c | 16 + 5 files changed, 784 insertions(+) create mode 100644 include/trace/events/kvmi.h diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 5312f179af9c..171e76449271 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -9,6 +9,8 @@ #include #include "../../../virt/kvm/kvmi_int.h" +#include + static unsigned long *msr_mask(struct kvm_vcpu *vcpu, unsigned int *msr) { switch (*msr) { @@ -102,6 +104,9 @@ static bool __kvmi_msr_event(struct kvm_vcpu *vcpu, struct msr_data *msr) if (old_msr.data == msr->data) return true; + trace_kvmi_event_msr_send(vcpu->vcpu_id, msr->index, old_msr.data, + msr->data); + action = kvmi_send_msr(vcpu, msr->index, old_msr.data, msr->data, &ret_value); switch (action) { @@ -113,6 +118,8 @@ static bool __kvmi_msr_event(struct kvm_vcpu *vcpu, struct msr_data *msr) kvmi_handle_common_event_actions(vcpu, action, "MSR"); } + trace_kvmi_event_msr_recv(vcpu->vcpu_id, action, ret_value); + return ret; } @@ -387,6 +394,8 @@ static bool __kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr, if (!test_bit(cr, IVCPU(vcpu)->cr_mask)) return true; + trace_kvmi_event_cr_send(vcpu->vcpu_id, cr, old_value, *new_value); + action = 
kvmi_send_cr(vcpu, cr, old_value, *new_value, &ret_value); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -397,6 +406,8 @@ static bool __kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr, kvmi_handle_common_event_actions(vcpu, action, "CR"); } + trace_kvmi_event_cr_recv(vcpu->vcpu_id, action, ret_value); + return ret; } @@ -437,6 +448,8 @@ static void __kvmi_xsetbv_event(struct kvm_vcpu *vcpu) { u32 action; + trace_kvmi_event_xsetbv_send(vcpu->vcpu_id); + action = kvmi_send_xsetbv(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -444,6 +457,8 @@ static void __kvmi_xsetbv_event(struct kvm_vcpu *vcpu) default: kvmi_handle_common_event_actions(vcpu, action, "XSETBV"); } + + trace_kvmi_event_xsetbv_recv(vcpu->vcpu_id, action); } void kvmi_xsetbv_event(struct kvm_vcpu *vcpu) @@ -460,12 +475,26 @@ void kvmi_xsetbv_event(struct kvm_vcpu *vcpu) kvmi_put(vcpu->kvm); } +static u64 get_next_rip(struct kvm_vcpu *vcpu) +{ + struct kvmi_vcpu *ivcpu = IVCPU(vcpu); + + if (ivcpu->have_delayed_regs) + return ivcpu->delayed_regs.rip; + else + return kvm_rip_read(vcpu); +} + void kvmi_arch_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 insn_len) { u32 action; u64 gpa; + u64 old_rip; gpa = kvm_mmu_gva_to_gpa_system(vcpu, gva, 0, NULL); + old_rip = kvm_rip_read(vcpu); + + trace_kvmi_event_bp_send(vcpu->vcpu_id, gpa, old_rip); action = kvmi_msg_send_bp(vcpu, gpa, insn_len); switch (action) { @@ -478,6 +507,8 @@ void kvmi_arch_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 insn_len) default: kvmi_handle_common_event_actions(vcpu, action, "BP"); } + + trace_kvmi_event_bp_recv(vcpu->vcpu_id, action, get_next_rip(vcpu)); } #define KVM_HC_XEN_HVM_OP_GUEST_REQUEST_VM_EVENT 24 @@ -504,6 +535,8 @@ void kvmi_arch_hypercall_event(struct kvm_vcpu *vcpu) { u32 action; + trace_kvmi_event_hc_send(vcpu->vcpu_id); + action = kvmi_msg_send_hypercall(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -511,6 +544,8 @@ void kvmi_arch_hypercall_event(struct kvm_vcpu *vcpu) default: kvmi_handle_common_event_actions(vcpu, action, "HYPERCALL"); } + + trace_kvmi_event_hc_recv(vcpu->vcpu_id, action); } bool kvmi_arch_pf_event(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva, @@ -532,6 +567,9 @@ bool kvmi_arch_pf_event(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva, if (ivcpu->effective_rep_complete) return true; + trace_kvmi_event_pf_send(vcpu->vcpu_id, gpa, gva, access, + kvm_rip_read(vcpu)); + action = kvmi_msg_send_pf(vcpu, gpa, gva, access, &ivcpu->ss_requested, &ivcpu->rep_complete, &ctx_addr, ivcpu->ctx_data, &ctx_size); @@ -553,6 +591,9 @@ bool kvmi_arch_pf_event(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva, kvmi_handle_common_event_actions(vcpu, action, "PF"); } + trace_kvmi_event_pf_recv(vcpu->vcpu_id, action, get_next_rip(vcpu), + ctx_size, ivcpu->ss_requested, ret); + return ret; } @@ -628,6 +669,11 @@ void kvmi_arch_trap_event(struct kvm_vcpu *vcpu) err = 0; } + trace_kvmi_event_trap_send(vcpu->vcpu_id, vector, + IVCPU(vcpu)->exception.nr, + err, IVCPU(vcpu)->exception.error_code, + vcpu->arch.cr2); + action = kvmi_send_trap(vcpu, vector, type, err, vcpu->arch.cr2); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -635,6 +681,8 @@ void kvmi_arch_trap_event(struct kvm_vcpu *vcpu) default: kvmi_handle_common_event_actions(vcpu, action, "TRAP"); } + + trace_kvmi_event_trap_recv(vcpu->vcpu_id, action); } static bool __kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, @@ -643,6 +691,8 @@ static bool __kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, u32 action; bool ret = 
false; + trace_kvmi_event_desc_send(vcpu->vcpu_id, descriptor, write); + action = kvmi_msg_send_descriptor(vcpu, descriptor, write); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -654,6 +704,8 @@ static bool __kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, kvmi_handle_common_event_actions(vcpu, action, "DESC"); } + trace_kvmi_event_desc_recv(vcpu->vcpu_id, action); + return ret; } @@ -718,6 +770,15 @@ int kvmi_arch_cmd_inject_exception(struct kvm_vcpu *vcpu, u8 vector, bool error_code_valid, u32 error_code, u64 address) { + struct x86_exception e = { + .error_code_valid = error_code_valid, + .error_code = error_code, + .address = address, + .vector = vector, + }; + + trace_kvmi_cmd_inject_exception(vcpu, &e); + if (!(is_vector_valid(vector) && is_gva_valid(vcpu, address))) return -KVM_EINVAL; @@ -876,6 +937,8 @@ void kvmi_arch_update_page_tracking(struct kvm *kvm, return; } + trace_kvmi_set_gfn_access(m->gfn, m->access, m->write_bitmap, slot->id); + for (i = 0; i < ARRAY_SIZE(track_modes); i++) { unsigned int allow_bit = track_modes[i].allow_bit; enum kvm_page_track_mode mode = track_modes[i].track_mode; diff --git a/include/trace/events/kvmi.h b/include/trace/events/kvmi.h new file mode 100644 index 000000000000..442189437fe7 --- /dev/null +++ b/include/trace/events/kvmi.h @@ -0,0 +1,680 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM kvmi + +#if !defined(_TRACE_KVMI_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_KVMI_H + +#include + +#ifndef __TRACE_KVMI_STRUCTURES +#define __TRACE_KVMI_STRUCTURES + +#undef EN +#define EN(x) { x, #x } + +static const struct trace_print_flags kvmi_msg_id_symbol[] = { + EN(KVMI_GET_VERSION), + EN(KVMI_CHECK_COMMAND), + EN(KVMI_CHECK_EVENT), + EN(KVMI_GET_GUEST_INFO), + EN(KVMI_GET_VCPU_INFO), + EN(KVMI_GET_REGISTERS), + EN(KVMI_SET_REGISTERS), + EN(KVMI_GET_PAGE_ACCESS), + EN(KVMI_SET_PAGE_ACCESS), + EN(KVMI_GET_PAGE_WRITE_BITMAP), + EN(KVMI_SET_PAGE_WRITE_BITMAP), + EN(KVMI_INJECT_EXCEPTION), + EN(KVMI_READ_PHYSICAL), + EN(KVMI_WRITE_PHYSICAL), + EN(KVMI_GET_MAP_TOKEN), + EN(KVMI_CONTROL_EVENTS), + EN(KVMI_CONTROL_CR), + EN(KVMI_CONTROL_MSR), + EN(KVMI_EVENT), + EN(KVMI_EVENT_REPLY), + EN(KVMI_GET_CPUID), + EN(KVMI_GET_XSAVE), + EN(KVMI_PAUSE_VCPU), + EN(KVMI_CONTROL_VM_EVENTS), + EN(KVMI_GET_MTRR_TYPE), + EN(KVMI_CONTROL_SPP), + EN(KVMI_CONTROL_CMD_RESPONSE), + {-1, NULL} +}; + +static const struct trace_print_flags kvmi_descriptor_symbol[] = { + EN(KVMI_DESC_IDTR), + EN(KVMI_DESC_GDTR), + EN(KVMI_DESC_LDTR), + EN(KVMI_DESC_TR), + {-1, NULL} +}; + +static const struct trace_print_flags kvmi_event_symbol[] = { + EN(KVMI_EVENT_UNHOOK), + EN(KVMI_EVENT_CR), + EN(KVMI_EVENT_MSR), + EN(KVMI_EVENT_XSETBV), + EN(KVMI_EVENT_BREAKPOINT), + EN(KVMI_EVENT_HYPERCALL), + EN(KVMI_EVENT_PF), + EN(KVMI_EVENT_TRAP), + EN(KVMI_EVENT_DESCRIPTOR), + EN(KVMI_EVENT_CREATE_VCPU), + EN(KVMI_EVENT_PAUSE_VCPU), + EN(KVMI_EVENT_SINGLESTEP), + { -1, NULL } +}; + +static const struct trace_print_flags kvmi_action_symbol[] = { + {KVMI_EVENT_ACTION_CONTINUE, "continue"}, + {KVMI_EVENT_ACTION_RETRY, "retry"}, + {KVMI_EVENT_ACTION_CRASH, "crash"}, + {-1, NULL} +}; + +#endif /* __TRACE_KVMI_STRUCTURES */ + +TRACE_EVENT( + kvmi_vm_command, + TP_PROTO(__u16 id, __u32 seq), + TP_ARGS(id, seq), + TP_STRUCT__entry( + __field(__u16, id) + __field(__u32, seq) + ), + TP_fast_assign( + __entry->id = id; + __entry->seq = seq; + ), + TP_printk("%s seq %d", + trace_print_symbols_seq(p, __entry->id, kvmi_msg_id_symbol), + 
__entry->seq) +); + +TRACE_EVENT( + kvmi_vm_reply, + TP_PROTO(__u16 id, __u32 seq, __s32 err), + TP_ARGS(id, seq, err), + TP_STRUCT__entry( + __field(__u16, id) + __field(__u32, seq) + __field(__s32, err) + ), + TP_fast_assign( + __entry->id = id; + __entry->seq = seq; + __entry->err = err; + ), + TP_printk("%s seq %d err %d", + trace_print_symbols_seq(p, __entry->id, kvmi_msg_id_symbol), + __entry->seq, + __entry->err) +); + +TRACE_EVENT( + kvmi_vcpu_command, + TP_PROTO(__u16 vcpu, __u16 id, __u32 seq), + TP_ARGS(vcpu, id, seq), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u16, id) + __field(__u32, seq) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->id = id; + __entry->seq = seq; + ), + TP_printk("vcpu %d %s seq %d", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->id, kvmi_msg_id_symbol), + __entry->seq) +); + +TRACE_EVENT( + kvmi_vcpu_reply, + TP_PROTO(__u16 vcpu, __u16 id, __u32 seq, __s32 err), + TP_ARGS(vcpu, id, seq, err), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u16, id) + __field(__u32, seq) + __field(__s32, err) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->id = id; + __entry->seq = seq; + __entry->err = err; + ), + TP_printk("vcpu %d %s seq %d err %d", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->id, kvmi_msg_id_symbol), + __entry->seq, + __entry->err) +); + +TRACE_EVENT( + kvmi_event, + TP_PROTO(__u16 vcpu, __u32 id, __u32 seq), + TP_ARGS(vcpu, id, seq), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, id) + __field(__u32, seq) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->id = id; + __entry->seq = seq; + ), + TP_printk("vcpu %d %s seq %d", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->id, kvmi_event_symbol), + __entry->seq) +); + +TRACE_EVENT( + kvmi_event_reply, + TP_PROTO(__u32 id, __u32 seq), + TP_ARGS(id, seq), + TP_STRUCT__entry( + __field(__u32, id) + __field(__u32, seq) + ), + TP_fast_assign( + __entry->id = id; + __entry->seq = seq; + ), + TP_printk("%s seq %d", + trace_print_symbols_seq(p, __entry->id, kvmi_event_symbol), + __entry->seq) +); + +#define KVMI_ACCESS_PRINTK() ({ \ + const char *saved_ptr = trace_seq_buffer_ptr(p); \ + static const char * const access_str[] = { \ + "---", "r--", "-w-", "rw-", "--x", "r-x", "-wx", "rwx" \ + }; \ + trace_seq_printf(p, "%s", access_str[__entry->access & 7]); \ + saved_ptr; \ +}) + +TRACE_EVENT( + kvmi_set_gfn_access, + TP_PROTO(__u64 gfn, __u8 access, __u32 bitmap, __u16 slot), + TP_ARGS(gfn, access, bitmap, slot), + TP_STRUCT__entry( + __field(__u64, gfn) + __field(__u8, access) + __field(__u32, bitmap) + __field(__u16, slot) + ), + TP_fast_assign( + __entry->gfn = gfn; + __entry->access = access; + __entry->bitmap = bitmap; + __entry->slot = slot; + ), + TP_printk("gfn %llx %s write bitmap %x slot %d", + __entry->gfn, KVMI_ACCESS_PRINTK(), + __entry->bitmap, __entry->slot) +); + +DECLARE_EVENT_CLASS( + kvmi_event_send_template, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu), + TP_STRUCT__entry( + __field(__u16, vcpu) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + ), + TP_printk("vcpu %d", + __entry->vcpu + ) +); +DECLARE_EVENT_CLASS( + kvmi_event_recv_template, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + ), + TP_printk("vcpu %d %s", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol) + ) +); + +TRACE_EVENT( + kvmi_event_cr_send, + 
TP_PROTO(__u16 vcpu, __u32 cr, __u64 old_value, __u64 new_value), + TP_ARGS(vcpu, cr, old_value, new_value), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, cr) + __field(__u64, old_value) + __field(__u64, new_value) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->cr = cr; + __entry->old_value = old_value; + __entry->new_value = new_value; + ), + TP_printk("vcpu %d cr %x old_value %llx new_value %llx", + __entry->vcpu, + __entry->cr, + __entry->old_value, + __entry->new_value + ) +); +TRACE_EVENT( + kvmi_event_cr_recv, + TP_PROTO(__u16 vcpu, __u32 action, __u64 new_value), + TP_ARGS(vcpu, action, new_value), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + __field(__u64, new_value) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + __entry->new_value = new_value; + ), + TP_printk("vcpu %d %s new_value %llx", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol), + __entry->new_value + ) +); + +TRACE_EVENT( + kvmi_event_msr_send, + TP_PROTO(__u16 vcpu, __u32 msr, __u64 old_value, __u64 new_value), + TP_ARGS(vcpu, msr, old_value, new_value), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, msr) + __field(__u64, old_value) + __field(__u64, new_value) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->msr = msr; + __entry->old_value = old_value; + __entry->new_value = new_value; + ), + TP_printk("vcpu %d msr %x old_value %llx new_value %llx", + __entry->vcpu, + __entry->msr, + __entry->old_value, + __entry->new_value + ) +); +TRACE_EVENT( + kvmi_event_msr_recv, + TP_PROTO(__u16 vcpu, __u32 action, __u64 new_value), + TP_ARGS(vcpu, action, new_value), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + __field(__u64, new_value) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + __entry->new_value = new_value; + ), + TP_printk("vcpu %d %s new_value %llx", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol), + __entry->new_value + ) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_xsetbv_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_xsetbv_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +TRACE_EVENT( + kvmi_event_bp_send, + TP_PROTO(__u16 vcpu, __u64 gpa, __u64 old_rip), + TP_ARGS(vcpu, gpa, old_rip), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u64, gpa) + __field(__u64, old_rip) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->gpa = gpa; + __entry->old_rip = old_rip; + ), + TP_printk("vcpu %d gpa %llx rip %llx", + __entry->vcpu, + __entry->gpa, + __entry->old_rip + ) +); +TRACE_EVENT( + kvmi_event_bp_recv, + TP_PROTO(__u16 vcpu, __u32 action, __u64 new_rip), + TP_ARGS(vcpu, action, new_rip), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + __field(__u64, new_rip) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + __entry->new_rip = new_rip; + ), + TP_printk("vcpu %d %s rip %llx", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol), + __entry->new_rip + ) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_hc_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_hc_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +TRACE_EVENT( + kvmi_event_pf_send, + TP_PROTO(__u16 vcpu, __u64 gpa, __u64 gva, __u8 access, __u64 rip), + TP_ARGS(vcpu, gpa, gva, 
access, rip), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u64, gpa) + __field(__u64, gva) + __field(__u8, access) + __field(__u64, rip) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->gpa = gpa; + __entry->gva = gva; + __entry->access = access; + __entry->rip = rip; + ), + TP_printk("vcpu %d gpa %llx %s gva %llx rip %llx", + __entry->vcpu, + __entry->gpa, + KVMI_ACCESS_PRINTK(), + __entry->gva, + __entry->rip + ) +); +TRACE_EVENT( + kvmi_event_pf_recv, + TP_PROTO(__u16 vcpu, __u32 action, __u64 next_rip, size_t custom_data, + bool singlestep, bool ret), + TP_ARGS(vcpu, action, next_rip, custom_data, singlestep, ret), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, action) + __field(__u64, next_rip) + __field(size_t, custom_data) + __field(bool, singlestep) + __field(bool, ret) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->action = action; + __entry->next_rip = next_rip; + __entry->custom_data = custom_data; + __entry->singlestep = singlestep; + __entry->ret = ret; + ), + TP_printk("vcpu %d %s rip %llx custom %zu %s", + __entry->vcpu, + trace_print_symbols_seq(p, __entry->action, + kvmi_action_symbol), + __entry->next_rip, __entry->custom_data, + (__entry->singlestep ? (__entry->ret ? "singlestep failed" : + "singlestep running") + : "") + ) +); + +TRACE_EVENT( + kvmi_event_trap_send, + TP_PROTO(__u16 vcpu, __u32 vector, __u8 nr, __u32 err, __u16 error_code, + __u64 cr2), + TP_ARGS(vcpu, vector, nr, err, error_code, cr2), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u32, vector) + __field(__u8, nr) + __field(__u32, err) + __field(__u16, error_code) + __field(__u64, cr2) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->vector = vector; + __entry->nr = nr; + __entry->err = err; + __entry->error_code = error_code; + __entry->cr2 = cr2; + ), + TP_printk("vcpu %d vector %x/%x err %x/%x address %llx", + __entry->vcpu, + __entry->vector, __entry->nr, + __entry->err, __entry->error_code, + __entry->cr2 + ) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_trap_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +TRACE_EVENT( + kvmi_event_desc_send, + TP_PROTO(__u16 vcpu, __u8 descriptor, __u8 write), + TP_ARGS(vcpu, descriptor, write), + TP_STRUCT__entry( + __field(__u16, vcpu) + __field(__u8, descriptor) + __field(__u8, write) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + __entry->descriptor = descriptor; + __entry->write = write; + ), + TP_printk("vcpu %d %s %s", + __entry->vcpu, + __entry->write ? 
"write" : "read", + trace_print_symbols_seq(p, __entry->descriptor, + kvmi_descriptor_symbol) + ) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_desc_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_create_vcpu_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_create_vcpu_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_pause_vcpu_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_pause_vcpu_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +DEFINE_EVENT(kvmi_event_send_template, kvmi_event_singlestep_send, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu) +); +DEFINE_EVENT(kvmi_event_recv_template, kvmi_event_singlestep_recv, + TP_PROTO(__u16 vcpu, __u32 action), + TP_ARGS(vcpu, action) +); + +TRACE_EVENT( + kvmi_run_singlestep, + TP_PROTO(struct kvm_vcpu *vcpu, __u64 gpa, __u8 access, __u8 level, + size_t custom_data), + TP_ARGS(vcpu, gpa, access, level, custom_data), + TP_STRUCT__entry( + __field(__u16, vcpu_id) + __field(__u64, gpa) + __field(__u8, access) + __field(size_t, len) + __array(__u8, insn, 15) + __field(__u8, level) + __field(size_t, custom_data) + ), + TP_fast_assign( + __entry->vcpu_id = vcpu->vcpu_id; + __entry->gpa = gpa; + __entry->access = access; + __entry->len = min_t(size_t, 15, + vcpu->arch.emulate_ctxt.fetch.ptr + - vcpu->arch.emulate_ctxt.fetch.data); + memcpy(__entry->insn, vcpu->arch.emulate_ctxt.fetch.data, 15); + __entry->level = level; + __entry->custom_data = custom_data; + ), + TP_printk("vcpu %d gpa %llx %s insn %s level %x custom %zu", + __entry->vcpu_id, + __entry->gpa, + KVMI_ACCESS_PRINTK(), + __print_hex(__entry->insn, __entry->len), + __entry->level, + __entry->custom_data + ) +); + +TRACE_EVENT( + kvmi_stop_singlestep, + TP_PROTO(__u16 vcpu), + TP_ARGS(vcpu), + TP_STRUCT__entry( + __field(__u16, vcpu) + ), + TP_fast_assign( + __entry->vcpu = vcpu; + ), + TP_printk("vcpu %d", __entry->vcpu + ) +); + +TRACE_EVENT( + kvmi_mem_map, + TP_PROTO(struct kvm *kvm, gpa_t req_gpa, gpa_t map_gpa), + TP_ARGS(kvm, req_gpa, map_gpa), + TP_STRUCT__entry( + __field_struct(uuid_t, uuid) + __field(gpa_t, req_gpa) + __field(gpa_t, map_gpa) + ), + TP_fast_assign( + struct kvmi *ikvm = kvmi_get(kvm); + + if (ikvm) { + memcpy(&__entry->uuid, &ikvm->uuid, sizeof(uuid_t)); + kvmi_put(kvm); + } else + memset(&__entry->uuid, 0, sizeof(uuid_t)); + __entry->req_gpa = req_gpa; + __entry->map_gpa = map_gpa; + ), + TP_printk("vm %pU req_gpa %llx map_gpa %llx", + &__entry->uuid, + __entry->req_gpa, + __entry->map_gpa + ) +); + +TRACE_EVENT( + kvmi_mem_unmap, + TP_PROTO(gpa_t map_gpa), + TP_ARGS(map_gpa), + TP_STRUCT__entry( + __field(gpa_t, map_gpa) + ), + TP_fast_assign( + __entry->map_gpa = map_gpa; + ), + TP_printk("map_gpa %llx", + __entry->map_gpa + ) +); + +#define EXS(x) { x##_VECTOR, "#" #x } + +#define kvm_trace_sym_exc \ + EXS(DE), EXS(DB), EXS(BP), EXS(OF), EXS(BR), EXS(UD), EXS(NM), \ + EXS(DF), EXS(TS), EXS(NP), EXS(SS), EXS(GP), EXS(PF), \ + EXS(MF), EXS(AC), EXS(MC) + +TRACE_EVENT( + kvmi_cmd_inject_exception, + TP_PROTO(struct kvm_vcpu *vcpu, struct x86_exception *fault), + TP_ARGS(vcpu, fault), + TP_STRUCT__entry( + __field(__u16, vcpu_id) + __field(__u8, vector) + __field(__u64, address) + __field(__u16, error_code) + __field(bool, error_code_valid) + ), + TP_fast_assign( + __entry->vcpu_id = 
vcpu->vcpu_id; + __entry->vector = fault->vector; + __entry->address = fault->address; + __entry->error_code = fault->error_code; + __entry->error_code_valid = fault->error_code_valid; + ), + TP_printk("vcpu %d %s address %llx error %x", + __entry->vcpu_id, + __print_symbolic(__entry->vector, kvm_trace_sym_exc), + __entry->vector == PF_VECTOR ? __entry->address : 0, + __entry->error_code_valid ? __entry->error_code : 0 + ) +); + +#endif /* _TRACE_KVMI_H */ + +#include diff --git a/virt/kvm/kvmi.c b/virt/kvm/kvmi.c index 157f3a401d64..ce28ca8c8d77 100644 --- a/virt/kvm/kvmi.c +++ b/virt/kvm/kvmi.c @@ -12,6 +12,9 @@ #include #include +#define CREATE_TRACE_POINTS +#include + #define MAX_PAUSE_REQUESTS 1001 static struct kmem_cache *msg_cache; @@ -1284,6 +1287,8 @@ static void __kvmi_singlestep_event(struct kvm_vcpu *vcpu) { u32 action; + trace_kvmi_event_singlestep_send(vcpu->vcpu_id); + action = kvmi_send_singlestep(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -1291,6 +1296,8 @@ static void __kvmi_singlestep_event(struct kvm_vcpu *vcpu) default: kvmi_handle_common_event_actions(vcpu, action, "SINGLESTEP"); } + + trace_kvmi_event_singlestep_recv(vcpu->vcpu_id, action); } static void kvmi_singlestep_event(struct kvm_vcpu *vcpu) @@ -1311,6 +1318,8 @@ static bool __kvmi_create_vcpu_event(struct kvm_vcpu *vcpu) u32 action; bool ret = false; + trace_kvmi_event_create_vcpu_send(vcpu->vcpu_id); + action = kvmi_msg_send_create_vcpu(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -1320,6 +1329,8 @@ static bool __kvmi_create_vcpu_event(struct kvm_vcpu *vcpu) kvmi_handle_common_event_actions(vcpu, action, "CREATE"); } + trace_kvmi_event_create_vcpu_recv(vcpu->vcpu_id, action); + return ret; } @@ -1345,6 +1356,8 @@ static bool __kvmi_pause_vcpu_event(struct kvm_vcpu *vcpu) u32 action; bool ret = false; + trace_kvmi_event_pause_vcpu_send(vcpu->vcpu_id); + action = kvmi_msg_send_pause_vcpu(vcpu); switch (action) { case KVMI_EVENT_ACTION_CONTINUE: @@ -1354,6 +1367,8 @@ static bool __kvmi_pause_vcpu_event(struct kvm_vcpu *vcpu) kvmi_handle_common_event_actions(vcpu, action, "PAUSE"); } + trace_kvmi_event_pause_vcpu_recv(vcpu->vcpu_id, action); + return ret; } @@ -1857,6 +1872,8 @@ void kvmi_stop_ss(struct kvm_vcpu *vcpu) ivcpu->ss_owner = false; + trace_kvmi_stop_singlestep(vcpu->vcpu_id); + kvmi_singlestep_event(vcpu); out: @@ -1892,6 +1909,9 @@ static bool kvmi_run_ss(struct kvm_vcpu *vcpu, gpa_t gpa, u8 access) gfn_t gfn = gpa_to_gfn(gpa); int err; + trace_kvmi_run_singlestep(vcpu, gpa, access, ikvm->ss_level, + IVCPU(vcpu)->ctx_size); + kvmi_arch_start_single_step(vcpu); err = write_custom_data(vcpu); diff --git a/virt/kvm/kvmi_mem.c b/virt/kvm/kvmi_mem.c index 6244add60062..a7a01646ea5c 100644 --- a/virt/kvm/kvmi_mem.c +++ b/virt/kvm/kvmi_mem.c @@ -23,6 +23,7 @@ #include #include +#include #include "kvmi_int.h" @@ -221,6 +222,8 @@ int kvmi_host_mem_map(struct kvm_vcpu *vcpu, gva_t tkn_gva, } req_mm = target_kvm->mm; + trace_kvmi_mem_map(target_kvm, req_gpa, map_gpa); + /* translate source addresses */ req_gfn = gpa_to_gfn(req_gpa); req_hva = gfn_to_hva_safe(target_kvm, req_gfn); @@ -274,6 +277,8 @@ int kvmi_host_mem_unmap(struct kvm_vcpu *vcpu, gpa_t map_gpa) kvm_debug("kvmi: unmapping request for map_gpa %016llx\n", map_gpa); + trace_kvmi_mem_unmap(map_gpa); + /* convert GPA -> HVA */ map_gfn = gpa_to_gfn(map_gpa); map_hva = gfn_to_hva_safe(vcpu->kvm, map_gfn); diff --git a/virt/kvm/kvmi_msg.c b/virt/kvm/kvmi_msg.c index a5f87aafa237..bdb1e60906f9 100644 --- 
a/virt/kvm/kvmi_msg.c +++ b/virt/kvm/kvmi_msg.c @@ -8,6 +8,8 @@ #include #include "kvmi_int.h" +#include + typedef int (*vcpu_reply_fct)(struct kvm_vcpu *vcpu, const struct kvmi_msg_hdr *msg, int err, const void *rpl, size_t rpl_size); @@ -165,6 +167,8 @@ static int kvmi_msg_vm_reply(struct kvmi *ikvm, const struct kvmi_msg_hdr *msg, int err, const void *rpl, size_t rpl_size) { + trace_kvmi_vm_reply(msg->id, msg->seq, err); + return kvmi_msg_reply(ikvm, msg, err, rpl, rpl_size); } @@ -202,6 +206,8 @@ int kvmi_msg_vcpu_reply(struct kvm_vcpu *vcpu, const struct kvmi_msg_hdr *msg, int err, const void *rpl, size_t rpl_size) { + trace_kvmi_vcpu_reply(vcpu->vcpu_id, msg->id, msg->seq, err); + return kvmi_msg_reply(IKVM(vcpu->kvm), msg, err, rpl, rpl_size); } @@ -559,6 +565,8 @@ static int handle_event_reply(struct kvm_vcpu *vcpu, struct kvmi_vcpu_reply *expected = &ivcpu->reply; size_t useful, received, common; + trace_kvmi_event_reply(reply->event, msg->seq); + if (unlikely(msg->seq != expected->seq)) goto out; @@ -883,6 +891,8 @@ static struct kvmi_msg_hdr *kvmi_msg_recv(struct kvmi *ikvm, bool *unsupported) static int kvmi_msg_dispatch_vm_cmd(struct kvmi *ikvm, const struct kvmi_msg_hdr *msg) { + trace_kvmi_vm_command(msg->id, msg->seq); + return msg_vm[msg->id](ikvm, msg, msg + 1); } @@ -895,6 +905,8 @@ static int kvmi_msg_dispatch_vcpu_job(struct kvmi *ikvm, struct kvm_vcpu *vcpu = NULL; int err; + trace_kvmi_vcpu_command(cmd->vcpu, hdr->id, hdr->seq); + if (invalid_vcpu_hdr(cmd)) return -KVM_EINVAL; @@ -1051,6 +1063,8 @@ int kvmi_send_event(struct kvm_vcpu *vcpu, u32 ev_id, ivcpu->reply.size = rpl_size; ivcpu->reply.error = -EINTR; + trace_kvmi_event(vcpu->vcpu_id, common.event, hdr.seq); + err = kvmi_sock_write(ikvm, vec, n, msg_size); if (err) goto out; @@ -1091,6 +1105,8 @@ int kvmi_msg_send_unhook(struct kvmi *ikvm) kvmi_setup_event_common(&common, KVMI_EVENT_UNHOOK, 0); + trace_kvmi_event(0, common.event, hdr.seq); + return kvmi_sock_write(ikvm, vec, n, msg_size); } From mboxrd@z Thu Jan 1 00:00:00 1970 From: =?UTF-8?q?Adalbert=20Laz=C4=83r?= Subject: [RFC PATCH v6 77/92] kvm: introspection: add trace functions Date: Fri, 9 Aug 2019 19:00:32 +0300 Message-ID: <20190809160047.8319-78-alazar@bitdefender.com> References: <20190809160047.8319-1-alazar@bitdefender.com> Mime-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 Return-path: In-Reply-To: <20190809160047.8319-1-alazar@bitdefender.com> List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: virtualization-bounces@lists.linux-foundation.org Errors-To: virtualization-bounces@lists.linux-foundation.org To: kvm@vger.kernel.org Cc: Tamas K Lengyel , Weijiang Yang , Yu C , =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= , Jan Kiszka , =?UTF-8?q?Samuel=20Laur=C3=A9n?= , Konrad Rzeszutek Wilk , Marian Rotariu , virtualization@lists.linux-foundation.org, =?UTF-8?q?Adalbert=20Laz=C4=83r?= , linux-mm@kvack.org, Patrick Colp , =?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= , Mathieu Tarral , Stefan Hajnoczi , =?UTF-8?q?Mircea=20C=C3=AErjaliu?= , Paolo Bonzini , Zhang@mail.linuxfoundation.org, =?UTF-8?q?Mihai=20Don=C8=9Bu?= List-Id: virtualization@lists.linuxfoundation.org Q28tZGV2ZWxvcGVkLWJ5OiBOaWN1yJlvciBDw67Im3UgPG5jaXR1QGJpdGRlZmVuZGVyLmNvbT4K U2lnbmVkLW9mZi1ieTogTmljdciZb3IgQ8OuyJt1IDxuY2l0dUBiaXRkZWZlbmRlci5jb20+CkNv LWRldmVsb3BlZC1ieTogTWlyY2VhIEPDrnJqYWxpdSA8bWNpcmphbGl1QGJpdGRlZmVuZGVyLmNv bT4KU2lnbmVkLW9mZi1ieTogTWlyY2VhIEPDrnJqYWxpdSA8bWNpcmphbGl1QGJpdGRlZmVuZGVy 
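For reference, the new kvmi trace events can be enabled and read through tracefs like any other tracepoints. Below is a minimal user-space sketch (illustrative only, not part of this patch); it assumes tracefs is mounted at /sys/kernel/tracing and that the kernel was built with this series:

/*
 * Illustrative sketch (not from the patch): enable every event in the
 * "kvmi" trace system and stream the formatted records to stdout.
 * Paths assume tracefs at /sys/kernel/tracing.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd;

	/* Writing "1" to the per-system enable file turns on all kvmi events. */
	fd = open("/sys/kernel/tracing/events/kvmi/enable", O_WRONLY);
	if (fd < 0 || write(fd, "1", 1) != 1) {
		perror("enable kvmi trace events");
		return 1;
	}
	close(fd);

	/* trace_pipe blocks until records arrive, then yields formatted text. */
	fd = open("/sys/kernel/tracing/trace_pipe", O_RDONLY);
	if (fd < 0) {
		perror("open trace_pipe");
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, n, stdout);
	close(fd);
	return 0;
}

The same result can presumably be obtained with trace-cmd or by echoing into the enable file from a shell; the sketch above is only meant to show where the events surface at runtime.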