From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mihai Caraman <mihai.caraman@freescale.com>
To: kvm-ppc@vger.kernel.org
Subject: [PATCH v4 1/6] KVM: PPC: Book3E: Increase FPU laziness
Date: Wed, 20 Aug 2014 16:36:22 +0300
Message-ID: <1408541787-24625-2-git-send-email-mihai.caraman@freescale.com>
In-Reply-To: <1408541787-24625-1-git-send-email-mihai.caraman@freescale.com>
References: <1408541787-24625-1-git-send-email-mihai.caraman@freescale.com>
MIME-Version: 1.0
Content-Type: text/plain
Cc: Mihai Caraman <mihai.caraman@freescale.com>, linuxppc-dev@lists.ozlabs.org,
 kvm@vger.kernel.org
List-Id: Linux on PowerPC Developers Mail List

Increase FPU laziness by loading the guest state into the unit before
entering the guest, instead of doing it on each vcpu schedule. Without this
improvement, an interrupt may claim the floating point unit and corrupt the
guest state.

Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
---
v4:
 - update commit message

v3:
 - no changes

v2:
 - remove fpu_active
 - add descriptive comments

 arch/powerpc/kvm/booke.c  | 43 ++++++++++++++++++++++++++++++++++++-------
 arch/powerpc/kvm/booke.h  | 34 ----------------------------------
 arch/powerpc/kvm/e500mc.c |  2 --
 3 files changed, 36 insertions(+), 43 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 074b7fc..91e7217 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -124,6 +124,40 @@ static void kvmppc_vcpu_sync_spe(struct kvm_vcpu *vcpu)
 }
 #endif
 
+/*
+ * Load up the guest vcpu FP state if it's needed.
+ * It also sets MSR_FP in the thread so that the host knows
+ * we're holding the FPU, and the host can then help to save
+ * the guest vcpu FP state if other threads need the FPU.
+ * This simulates an FP unavailable fault.
+ *
+ * It must be called with preemption disabled.
+ */
+static inline void kvmppc_load_guest_fp(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_PPC_FPU
+	if (!(current->thread.regs->msr & MSR_FP)) {
+		enable_kernel_fp();
+		load_fp_state(&vcpu->arch.fp);
+		current->thread.fp_save_area = &vcpu->arch.fp;
+		current->thread.regs->msr |= MSR_FP;
+	}
+#endif
+}
+
+/*
+ * Save the guest vcpu FP state into the thread.
+ * It must be called with preemption disabled.
+ */
+static inline void kvmppc_save_guest_fp(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_PPC_FPU
+	if (current->thread.regs->msr & MSR_FP)
+		giveup_fpu(current);
+	current->thread.fp_save_area = NULL;
+#endif
+}
+
 static void kvmppc_vcpu_sync_fpu(struct kvm_vcpu *vcpu)
 {
 #if defined(CONFIG_PPC_FPU) && !defined(CONFIG_KVM_BOOKE_HV)
@@ -658,12 +692,8 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 
 	/*
 	 * Since we can't trap on MSR_FP in GS-mode, we consider the guest
-	 * as always using the FPU.  Kernel usage of FP (via
-	 * enable_kernel_fp()) in this thread must not occur while
-	 * vcpu->fpu_active is set.
+	 * as always using the FPU.
 	 */
-	vcpu->fpu_active = 1;
-
 	kvmppc_load_guest_fp(vcpu);
 #endif
 
@@ -687,8 +717,6 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 
 #ifdef CONFIG_PPC_FPU
 	kvmppc_save_guest_fp(vcpu);
-
-	vcpu->fpu_active = 0;
 #endif
 
 out:
@@ -1194,6 +1222,7 @@ out:
 		else {
 			/* interrupts now hard-disabled */
 			kvmppc_fix_ee_before_entry();
+			kvmppc_load_guest_fp(vcpu);
 		}
 	}
 
diff --git a/arch/powerpc/kvm/booke.h b/arch/powerpc/kvm/booke.h
index f753543..e73d513 100644
--- a/arch/powerpc/kvm/booke.h
+++ b/arch/powerpc/kvm/booke.h
@@ -116,40 +116,6 @@ extern int kvmppc_core_emulate_mtspr_e500(struct kvm_vcpu *vcpu, int sprn,
 extern int kvmppc_core_emulate_mfspr_e500(struct kvm_vcpu *vcpu, int sprn,
 					  ulong *spr_val);
 
-/*
- * Load up guest vcpu FP state if it's needed.
- * It also set the MSR_FP in thread so that host know
- * we're holding FPU, and then host can help to save
- * guest vcpu FP state if other threads require to use FPU.
- * This simulates an FP unavailable fault.
- *
- * It requires to be called with preemption disabled.
- */
-static inline void kvmppc_load_guest_fp(struct kvm_vcpu *vcpu)
-{
-#ifdef CONFIG_PPC_FPU
-	if (vcpu->fpu_active && !(current->thread.regs->msr & MSR_FP)) {
-		enable_kernel_fp();
-		load_fp_state(&vcpu->arch.fp);
-		current->thread.fp_save_area = &vcpu->arch.fp;
-		current->thread.regs->msr |= MSR_FP;
-	}
-#endif
-}
-
-/*
- * Save guest vcpu FP state into thread.
- * It requires to be called with preemption disabled.
- */
-static inline void kvmppc_save_guest_fp(struct kvm_vcpu *vcpu)
-{
-#ifdef CONFIG_PPC_FPU
-	if (vcpu->fpu_active && (current->thread.regs->msr & MSR_FP))
-		giveup_fpu(current);
-	current->thread.fp_save_area = NULL;
-#endif
-}
-
 static inline void kvmppc_clear_dbsr(void)
 {
 	mtspr(SPRN_DBSR, mfspr(SPRN_DBSR));
diff --git a/arch/powerpc/kvm/e500mc.c b/arch/powerpc/kvm/e500mc.c
index 000cf82..4549349 100644
--- a/arch/powerpc/kvm/e500mc.c
+++ b/arch/powerpc/kvm/e500mc.c
@@ -145,8 +145,6 @@ static void kvmppc_core_vcpu_load_e500mc(struct kvm_vcpu *vcpu, int cpu)
 		kvmppc_e500_tlbil_all(vcpu_e500);
 		__get_cpu_var(last_vcpu_of_lpid)[vcpu->kvm->arch.lpid] = vcpu;
 	}
-
-	kvmppc_load_guest_fp(vcpu);
 }
 
 static void kvmppc_core_vcpu_put_e500mc(struct kvm_vcpu *vcpu)
-- 
1.7.11.7