From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sat, 19 Oct 2019 12:12:15 +0100
From: Marc Zyngier <maz@kernel.org>
To: Steven Price <steven.price@arm.com>
Cc: Will Deacon <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu,
	Catalin Marinas <catalin.marinas@arm.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Radim Krčmář <rkrcmar@redhat.com>,
	Russell King <linux@armlinux.org.uk>,
	James Morse <james.morse@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	kvm@vger.kernel.org,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 05/10] KVM: arm64: Support stolen time reporting via shared structure
Message-ID: <86eez9yoog.wl-maz@kernel.org>
In-Reply-To: <20191011125930.40834-6-steven.price@arm.com>
References: <20191011125930.40834-1-steven.price@arm.com>
	<20191011125930.40834-6-steven.price@arm.com>

On Fri, 11 Oct 2019 13:59:25 +0100,
Steven Price <steven.price@arm.com> wrote:
> 
> Implement the service call for configuring a shared structure between a
> VCPU and the hypervisor in which the hypervisor can write the time
> stolen from the VCPU's execution time by other tasks on the host.
> 
> User space allocates memory which is placed at an IPA also chosen by user
> space. The hypervisor then updates the shared structure using
> kvm_put_guest() to ensure single copy atomicity of the 64-bit value
> reporting the stolen time in nanoseconds.
> 
> Whenever stolen time is enabled by the guest, the stolen time counter is
> reset.
> 
> The stolen time itself is retrieved from the sched_info structure
> maintained by the Linux scheduler code. We enable SCHEDSTATS when
> selecting KVM Kconfig to ensure this value is meaningful.
> 
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
>  arch/arm/include/asm/kvm_host.h   | 20 +++++++++++
>  arch/arm64/include/asm/kvm_host.h | 21 +++++++++++-
>  arch/arm64/kvm/Kconfig            |  1 +
>  include/linux/kvm_types.h         |  2 ++
>  virt/kvm/arm/arm.c                | 11 ++++++
>  virt/kvm/arm/hypercalls.c         |  3 ++
>  virt/kvm/arm/pvtime.c             | 56 +++++++++++++++++++++++++++++++
>  7 files changed, 113 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 5a0c3569ebde..5c401482d62d 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -39,6 +39,7 @@
>  	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
>  #define KVM_REQ_IRQ_PENDING	KVM_ARCH_REQ(1)
>  #define KVM_REQ_VCPU_RESET	KVM_ARCH_REQ(2)
> +#define KVM_REQ_RECORD_STEAL	KVM_ARCH_REQ(3)
> 
>  DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
> 
> @@ -329,6 +330,25 @@ static inline long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu)
>  	return SMCCC_RET_NOT_SUPPORTED;
>  }
> 
> +static inline long kvm_hypercall_stolen_time(struct kvm_vcpu *vcpu)
> +{
> +	return SMCCC_RET_NOT_SUPPORTED;
> +}
> +
> +static inline int kvm_update_stolen_time(struct kvm_vcpu *vcpu, bool init)
> +{
> +	return -ENOTSUPP;
> +}
> +
> +static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
> +{
> +}
> +
> +static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
> +{
> +	return false;
> +}
> +
>  void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
> 
>  struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 93b46d9526d0..1697e63f6dd8 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -44,6 +44,7 @@
>  	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
>  #define KVM_REQ_IRQ_PENDING	KVM_ARCH_REQ(1)
>  #define KVM_REQ_VCPU_RESET	KVM_ARCH_REQ(2)
> +#define KVM_REQ_RECORD_STEAL	KVM_ARCH_REQ(3)
> 
>  DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
> 
> @@ -338,8 +339,14 @@ struct kvm_vcpu_arch {
>  	/* True when deferrable sysregs are loaded on the physical CPU,
>  	 * see kvm_vcpu_load_sysregs and kvm_vcpu_put_sysregs. */
>  	bool sysregs_loaded_on_cpu;
> -};
> 
> +	/* Guest PV state */
> +	struct {
> +		u64 steal;
> +		u64 last_steal;
> +		gpa_t base;
> +	} steal;
> +};

nit: Please keep an empty line at the end of the structure.
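(Editor's note: concretely, the nit presumably asks for the blank line that
used to separate the closing brace from the vcpu_sve_pffr() definition that
follows, i.e. the end of the hunk would gain one added empty line:)

> +	} steal;
> +};
> +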
>  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
>  #define vcpu_sve_pffr(vcpu)	((void *)((char *)((vcpu)->arch.sve_state) + \
>  			      sve_ffr_offset((vcpu)->arch.sve_max_vl)))
> @@ -479,6 +486,18 @@ int kvm_perf_init(void);
>  int kvm_perf_teardown(void);
> 
>  long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu);
> +long kvm_hypercall_stolen_time(struct kvm_vcpu *vcpu);
> +int kvm_update_stolen_time(struct kvm_vcpu *vcpu, bool init);
> +
> +static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
> +{
> +	vcpu_arch->steal.base = GPA_INVALID;
> +}
> +
> +static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
> +{
> +	return (vcpu_arch->steal.base != GPA_INVALID);
> +}
> 
>  void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
> 
> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> index a67121d419a2..d8b88e40d223 100644
> --- a/arch/arm64/kvm/Kconfig
> +++ b/arch/arm64/kvm/Kconfig
> @@ -39,6 +39,7 @@ config KVM
>  	select IRQ_BYPASS_MANAGER
>  	select HAVE_KVM_IRQ_BYPASS
>  	select HAVE_KVM_VCPU_RUN_PID_CHANGE
> +	select SCHEDSTATS
>  	---help---
>  	  Support hosting virtualized guest machines.
>  	  We don't support KVM with 16K page tables yet, due to the multiple
> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index bde5374ae021..1c88e69db3d9 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -35,6 +35,8 @@ typedef unsigned long gva_t;
>  typedef u64            gpa_t;
>  typedef u64            gfn_t;
> 
> +#define GPA_INVALID	(~(gpa_t)0)
> +
>  typedef unsigned long  hva_t;
>  typedef u64            hpa_t;
>  typedef u64            hfn_t;
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 86c6aa1cb58e..5d3059aeadb1 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -40,6 +40,10 @@
>  #include
>  #include
> 
> +#include
> +#include
> +#include
> +
>  #ifdef REQUIRES_VIRT
>  __asm__(".arch_extension virt");
>  #endif
> @@ -351,6 +355,8 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
> 
>  	kvm_arm_reset_debug_ptr(vcpu);
> 
> +	kvm_arm_pvtime_vcpu_init(&vcpu->arch);
> +
>  	return kvm_vgic_vcpu_init(vcpu);
>  }
> 
> @@ -380,6 +386,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	kvm_vcpu_load_sysregs(vcpu);
>  	kvm_arch_vcpu_load_fp(vcpu);
>  	kvm_vcpu_pmu_restore_guest(vcpu);
> +	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
> +		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
> 
>  	if (single_task_running())
>  		vcpu_clear_wfe_traps(vcpu);
> @@ -645,6 +653,9 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
>  		 * that a VCPU sees new virtual interrupts.
>  		 */
>  		kvm_check_request(KVM_REQ_IRQ_PENDING, vcpu);
> +
> +		if (kvm_check_request(KVM_REQ_RECORD_STEAL, vcpu))
> +			kvm_update_stolen_time(vcpu, false);
>  	}
>  }
> 
> diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
> index 97ea8b133e77..5c333a64390e 100644
> --- a/virt/kvm/arm/hypercalls.c
> +++ b/virt/kvm/arm/hypercalls.c
> @@ -56,6 +56,9 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  	case ARM_SMCCC_HV_PV_TIME_FEATURES:
>  		val = kvm_hypercall_pv_features(vcpu);
>  		break;
> +	case ARM_SMCCC_HV_PV_TIME_ST:
> +		val = kvm_hypercall_stolen_time(vcpu);
> +		break;
>  	default:
>  		return kvm_psci_call(vcpu);
>  	}
> diff --git a/virt/kvm/arm/pvtime.c b/virt/kvm/arm/pvtime.c
> index 8d0fad671dcf..a90f1b4ebd13 100644
> --- a/virt/kvm/arm/pvtime.c
> +++ b/virt/kvm/arm/pvtime.c
> @@ -3,8 +3,45 @@
> 
>  #include
> 
> +#include
> +
>  #include
> 
> +int kvm_update_stolen_time(struct kvm_vcpu *vcpu, bool init)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +	u64 steal;
> +	u64 steal_le;

This should be __le64.

> +	u64 offset;
> +	int idx;
> +	u64 base = vcpu->arch.steal.base;
> +
> +	if (base == GPA_INVALID)
> +		return -ENOTSUPP;
> +
> +	/* Let's do the local bookkeeping */
> +	steal = vcpu->arch.steal.steal;
> +	steal += current->sched_info.run_delay - vcpu->arch.steal.last_steal;
> +	vcpu->arch.steal.last_steal = current->sched_info.run_delay;
> +	vcpu->arch.steal.steal = steal;
> +
> +	steal_le = cpu_to_le64(steal);
> +	idx = srcu_read_lock(&kvm->srcu);
> +	if (init) {
> +		struct pvclock_vcpu_stolen_time init_values = {
> +			.revision = 0,
> +			.attributes = 0

nit: 0 is the default initialiser.

> +		};
> +		kvm_write_guest(kvm, base, &init_values,
> +				sizeof(init_values));
> +	}

I'm not convinced by this init phase right in the middle of the normal
path. It looks ugly, and it'd be better if moved out of line. I'd
suggest:

static void kvm_init_stolen_time(struct kvm_vcpu *vcpu)
{
	struct pvclock_vcpu_stolen_time init_values = { };

	vcpu->arch.steal.steal = 0;
	vcpu->arch.steal.last_steal = current->sched_info.run_delay;

	idx = srcu_read_lock(&kvm->srcu);
	kvm_write_guest(kvm, base, &init_values, sizeof(init_values));
	srcu_read_unlock(&kvm->srcu, idx);
}

and change the two callers accordingly. Or even better, move this code
to the hypercall handling function, because that's where it actually
belongs.
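(Editor's note: as quoted, the suggestion leaves idx, kvm and base
undeclared; a self-contained version of the same helper, built only from
pieces already shown in this patch, would be:)

static void kvm_init_stolen_time(struct kvm_vcpu *vcpu)
{
	struct pvclock_vcpu_stolen_time init_values = { };
	struct kvm *kvm = vcpu->kvm;
	gpa_t base = vcpu->arch.steal.base;
	int idx;

	/* Restart stolen-time accounting from this point */
	vcpu->arch.steal.steal = 0;
	vcpu->arch.steal.last_steal = current->sched_info.run_delay;

	/* Zero-initialise the shared structure in guest memory */
	idx = srcu_read_lock(&kvm->srcu);
	kvm_write_guest(kvm, base, &init_values, sizeof(init_values));
	srcu_read_unlock(&kvm->srcu, idx);
}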
> +	offset = offsetof(struct pvclock_vcpu_stolen_time, stolen_time);
> +	kvm_put_guest(kvm, base + offset, steal_le, u64);
> +	srcu_read_unlock(&kvm->srcu, idx);
> +
> +	return 0;
> +}
> +
>  long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu)
>  {
>  	u32 feature = smccc_get_arg1(vcpu);
> @@ -12,6 +49,7 @@ long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu)
> 
>  	switch (feature) {
>  	case ARM_SMCCC_HV_PV_TIME_FEATURES:
> +	case ARM_SMCCC_HV_PV_TIME_ST:
>  		val = SMCCC_RET_SUCCESS;
>  		break;
>  	}
> @@ -19,3 +57,21 @@ long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu)
>  	return val;
>  }
> 
> +long kvm_hypercall_stolen_time(struct kvm_vcpu *vcpu)

Why long? If that's a base address, then it is either a phys_addr_t or
a gpa_t. I'd suggest you move the error check to the caller.

> +{
> +	int err;
> +
> +	/*
> +	 * Start counting stolen time from the time the guest requests
> +	 * the feature enabled.
> +	 */
> +	vcpu->arch.steal.steal = 0;
> +	vcpu->arch.steal.last_steal = current->sched_info.run_delay;
> +
> +	err = kvm_update_stolen_time(vcpu, true);
> +
> +	if (err)
> +		return SMCCC_RET_NOT_SUPPORTED;
> +
> +	return vcpu->arch.steal.base;
> +}
> -- 
> 2.20.1
> 

Thanks,

	M.

-- 
Jazz is not dead, it just smells funny.
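(Editor's note: folding the review comments together — gpa_t return type,
init moved out of line, error check in the caller — the reworked code might
look roughly like the sketch below. This is an illustration only, assuming
val in kvm_hvc_call_handler() defaults to SMCCC_RET_NOT_SUPPORTED and using
the kvm_init_stolen_time() helper completed earlier in this note:)

gpa_t kvm_hypercall_stolen_time(struct kvm_vcpu *vcpu)
{
	/*
	 * Start counting stolen time from the point the guest enables
	 * the feature.
	 */
	if (vcpu->arch.steal.base != GPA_INVALID)
		kvm_init_stolen_time(vcpu);

	/* GPA_INVALID if userspace never configured the region */
	return vcpu->arch.steal.base;
}

with the caller in kvm_hvc_call_handler() doing the error translation:

	case ARM_SMCCC_HV_PV_TIME_ST:
		gpa = kvm_hypercall_stolen_time(vcpu);
		if (gpa != GPA_INVALID)
			val = gpa;
		break;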