From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S968187AbeE2WSJ (ORCPT );
        Tue, 29 May 2018 18:18:09 -0400
Received: from mail-pf0-f194.google.com ([209.85.192.194]:35036 "EHLO
        mail-pf0-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S968083AbeE2WRk (ORCPT );
        Tue, 29 May 2018 18:17:40 -0400
X-Google-Smtp-Source: ADUXVKIoSh1i1WIZkORRpN/RzayD9GIHV8SOmq7PkftI6+8lNqi0/8DIwNL9GZ/Tem8b8CtkhIVpXg==
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: Thomas Garnier , Paolo Bonzini ,
        =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= , Thomas Gleixner ,
        Ingo Molnar , "H. Peter Anvin" , x86@kernel.org, Joerg Roedel ,
        kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 19/27] kvm: Adapt assembly for PIE support
Date: Tue, 29 May 2018 15:15:20 -0700
Message-Id: <20180529221625.33541-20-thgarnie@google.com>
X-Mailer: git-send-email 2.17.0.921.gf22659ad46-goog
In-Reply-To: <20180529221625.33541-1-thgarnie@google.com>
References: <20180529221625.33541-1-thgarnie@google.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Change the assembly code to use only relative references to symbols so
that the kernel can be PIE compatible.

The new __ASM_MOVABS macro is used to get the address of a symbol on
both 32-bit and 64-bit with PIE support.

Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below 0xffffffff80000000.
Signed-off-by: Thomas Garnier
---
 arch/x86/include/asm/kvm_host.h | 8 ++++++--
 arch/x86/kernel/kvm.c           | 6 ++++--
 arch/x86/kvm/svm.c              | 4 ++--
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 130874077c93..6afb2161263d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1389,9 +1389,13 @@ asmlinkage void kvm_spurious_fault(void);
 	".pushsection .fixup, \"ax\" \n" \
 	"667: \n\t" \
 	cleanup_insn "\n\t" \
-	"cmpb $0, kvm_rebooting \n\t" \
+	"cmpb $0, kvm_rebooting" __ASM_SEL(,(%%rip)) " \n\t" \
 	"jne 668b \n\t" \
-	__ASM_SIZE(push) " $666b \n\t" \
+	__ASM_SIZE(push) "$0 \n\t" \
+	__ASM_SIZE(push) "%%" _ASM_AX " \n\t" \
+	_ASM_MOVABS " $666b, %%" _ASM_AX "\n\t" \
+	_ASM_MOV " %%" _ASM_AX ", " __ASM_SEL(4,8) "(%%" _ASM_SP ") \n\t" \
+	__ASM_SIZE(pop) "%%" _ASM_AX " \n\t" \
 	"call kvm_spurious_fault \n\t" \
 	".popsection \n\t" \
 	_ASM_EXTABLE(666b, 667b)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5b2300b818af..38716c409a98 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -726,8 +726,10 @@ asm(
 ".global __raw_callee_save___kvm_vcpu_is_preempted;"
 ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
 "__raw_callee_save___kvm_vcpu_is_preempted:"
-"movq __per_cpu_offset(,%rdi,8), %rax;"
-"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
+"leaq __per_cpu_offset(%rip), %rax;"
+"movq (%rax,%rdi,8), %rax;"
+"addq " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rip), %rax;"
+"cmpb $0, (%rax);"
 "setne %al;"
 "ret;"
 ".popsection");
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b2e7140f23ea..bf09d1993d8d 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -707,12 +707,12 @@ static u32 svm_msrpm_offset(u32 msr)
 
 static inline void clgi(void)
 {
-	asm volatile (__ex(SVM_CLGI));
+	asm volatile (__ex(SVM_CLGI) : :);
 }
 
 static inline void stgi(void)
 {
-	asm volatile (__ex(SVM_STGI));
+	asm volatile (__ex(SVM_STGI) : :);
 }
 
 static inline void invlpga(unsigned long addr, u32 asid)
-- 
2.17.0.921.gf22659ad46-goog