From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <longman@redhat.com>
Cc: "Jeremy Fitzhardinge" <jeremy@goop.org>, "Chris Wright" <chrisw@sous-sol.org>,
	"Alok Kataria" <akataria@vmware.com>, "Rusty Russell" <rusty@rustcorp.com.au>,
	"Ingo Molnar" <mingo@redhat.com>, "Thomas Gleixner" <tglx@linutronix.de>,
	"H. Peter Anvin" <hpa@zytor.com>, linux-arch@vger.kernel.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	"Pan Xinhui" <xinhui.pan@linux.vnet.ibm.com>, "Paolo Bonzini" <pbonzini@redhat.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>, "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
	"Juergen Gross" <jgross@suse.com>
Subject: Re: [PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
Date: Tue, 14 Feb 2017 17:03:06 +0100
Message-ID: <20170214160306.GP6500@twins.programming.kicks-ass.net>
In-Reply-To: <fc65322b-902e-6589-f64c-62be5abdb093@redhat.com>

On Tue, Feb 14, 2017 at 09:46:17AM -0500, Waiman Long wrote:
> On 02/14/2017 04:39 AM, Peter Zijlstra wrote:
> > On Mon, Feb 13, 2017 at 05:34:01PM -0500, Waiman Long wrote:
> >> It is the address of &steal_time that will exceed the 32-bit limit.
> >
> > That seems extremely unlikely. That would mean we have more than 4G
> > worth of per-cpu variables declared in the kernel.
>
> I have some doubt about if the compiler is able to properly use
> RIP-relative addressing for this.

It's not RIP relative; &steal_time lives in the .data..percpu section and
is absolute in that.

> Anyway, it seems like constraints
> aren't allowed for asm() when not in the function context, at least for
> the the compiler that I am using (4.8.5). So it is a moot point.

Well kvm_steal_time is (host/guest) ABI anyway, so the offset is fixed
and hard-coding it isn't a problem.
$ readelf -s defconfig-build/vmlinux | grep steal_time
100843: 0000000000017ac0    64 OBJECT  WEAK   DEFAULT   35 steal_time

$ objdump -dr defconfig-build/vmlinux | awk '/[<][^>]*[>]:/ { o=0 } /[<]__raw_callee_save___kvm_vcpu_is_preempted[>]:/ { o=1 } { if (o) print $0 }'
ffffffff810b4480 <__raw_callee_save___kvm_vcpu_is_preempted>:
ffffffff810b4480:	55                   	push   %rbp
ffffffff810b4481:	48 89 e5             	mov    %rsp,%rbp
ffffffff810b4484:	48 8b 04 fd 00 94 46 	mov    -0x7db96c00(,%rdi,8),%rax
ffffffff810b448b:	82
			ffffffff810b4488: R_X86_64_32S	__per_cpu_offset
ffffffff810b448c:	80 b8 d0 7a 01 00 00 	cmpb   $0x0,0x17ad0(%rax)
			ffffffff810b448e: R_X86_64_32S	steal_time+0x10
ffffffff810b4493:	0f 95 c0             	setne  %al
ffffffff810b4496:	5d                   	pop    %rbp
ffffffff810b4497:	c3                   	retq

And as you'll note, the displacement is correct and 'small'.

The below relies on the 'extra' cast in PVOP_CALL_ARG1() to extend the
argument to 64bit on the call side of things.

---
 arch/x86/kernel/kvm.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 099fcba..2c854b8 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -589,6 +589,7 @@ static void kvm_wait(u8 *ptr, u8 val)
 	local_irq_restore(flags);
 }
 
+#ifdef CONFIG_X86_32
 __visible bool __kvm_vcpu_is_preempted(int cpu)
 {
 	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
@@ -597,6 +598,26 @@ __visible bool __kvm_vcpu_is_preempted(int cpu)
 }
 PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
 
+#else
+
+extern bool __raw_callee_save___kvm_vcpu_is_preempted(int cpu);
+
+asm(
+".pushsection .text;"
+".global __raw_callee_save___kvm_vcpu_is_preempted;"
+".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
+"__raw_callee_save___kvm_vcpu_is_preempted:"
+FRAME_BEGIN
+"movq __per_cpu_offset(,%rdi,8), %rax;"
+"cmpb $0, 16+steal_time(%rax);"
+"setne %al;"
+FRAME_END
+"ret;"
+".popsection"
+);
+
+#endif
+
 /*
  * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
  */