* [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles
@ 2016-06-20 14:58 Paolo Bonzini
  2016-06-20 14:58 ` [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore Paolo Bonzini
                   ` (6 more replies)
  0 siblings, 7 replies; 25+ messages in thread
From: Paolo Bonzini @ 2016-06-20 14:58 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H . Peter Anvin,
	Ingo Molnar, Thomas Gleixner

The first two patches are the optimizations I posted on May 30th
for the system call entry/exit code.  The only change is in the
function names, which use the user_{enter,exit}_irqoff naming favored
by Andy and Ingo.  The first patch matches what commit d0e536d8939
("context_tracking: avoid irq_save/irq_restore on guest entry and exit",
2015-10-28) did for guest entry and exit.  The second simply adds
an inline annotation; the compiler doesn't figure it out because the
function is not static.

The last two patches move guest_{enter,exit} to the same naming
convention, removing the KVM wrappers kvm_guest_{enter,exit} and
__kvm_guest_{enter,exit} in the process.  I would like these two to
go through the KVM tree because I have other optimizations for 4.8
on top of these patches.
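
For reference, the 150 clock cycles in the subject line are the sum of the
roughly 90 cycles saved by patch 1 and the roughly 60 cycles saved by patch 2,
per their commit messages.  What the *_irqoff variants skip is the interrupt
flag save/restore done inside the context tracking wrappers; paraphrasing
kernel/context_tracking.c of this era (a sketch, not a quote of this series),
user_exit() funnels into something like the following, whereas
user_exit_irqoff() calls __context_tracking_exit() directly:

    void context_tracking_exit(enum ctx_state state)
    {
        unsigned long flags;

        if (in_interrupt())     /* check also skipped by the _irqoff variants */
            return;

        local_irq_save(flags);  /* the save/restore the entry code now avoids */
        __context_tracking_exit(state);
        local_irq_restore(flags);
    }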

Thanks,

Paolo

Paolo Bonzini (4):
  x86/entry: Avoid interrupt flag save and restore
  x86/entry: Inline enter_from_user_mode
  context_tracking: move rcu_virt_note_context_switch out of kvm_host.h
  KVM: remove kvm_guest_enter/exit wrappers

 arch/arm/kvm/arm.c               |  8 +++---
 arch/mips/kvm/mips.c             |  4 +--
 arch/powerpc/kvm/book3s_hv.c     |  4 +--
 arch/powerpc/kvm/book3s_pr.c     |  4 +--
 arch/powerpc/kvm/booke.c         |  4 +--
 arch/powerpc/kvm/powerpc.c       |  2 +-
 arch/s390/kvm/kvm-s390.c         |  4 +--
 arch/x86/entry/common.c          |  6 ++---
 arch/x86/kvm/x86.c               |  4 +--
 include/linux/context_tracking.h | 53 +++++++++++++++++++++++++++++++++++++---
 include/linux/kvm_host.h         | 39 -----------------------------
 11 files changed, 69 insertions(+), 63 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore
  2016-06-20 14:58 [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Paolo Bonzini
@ 2016-06-20 14:58 ` Paolo Bonzini
  2016-06-20 20:21   ` Rik van Riel
                     ` (3 more replies)
  2016-06-20 14:58 ` [PATCH 2/2] x86/entry: Inline enter_from_user_mode Paolo Bonzini
                   ` (5 subsequent siblings)
  6 siblings, 4 replies; 25+ messages in thread
From: Paolo Bonzini @ 2016-06-20 14:58 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H. Peter Anvin,
	Ingo Molnar, Thomas Gleixner

Thanks to all the work that was done by Andy Lutomirski and others,
enter_from_user_mode and prepare_exit_to_usermode are now called only with
interrupts disabled.  Let's provide them a version of user_enter/user_exit
that skips saving and restoring the interrupt flag.

On an AMD-based machine I tested this patch on, with force-enabled
context tracking, the speed-up in system calls was 90 clock cycles or 6%,
measured with the following simple benchmark:

    #include <sys/signal.h>
    #include <time.h>
    #include <unistd.h>
    #include <stdio.h>

    unsigned long rdtsc()
    {
        unsigned long result;
        asm volatile("rdtsc; shl $32, %%rdx; mov %%eax, %%eax\n"
                     "or %%rdx, %%rax" : "=a" (result) : : "rdx");
        return result;
    }

    int main()
    {
        unsigned long tsc1, tsc2;
        int pid = getpid();
        int i;

        tsc1 = rdtsc();
        for (i = 0; i < 100000000; i++)
            kill(pid, SIGWINCH);
        tsc2 = rdtsc();

        printf("%ld\n", tsc2 - tsc1);
    }
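
A note on the loop: each of the 100000000 iterations issues one kill()
system call, so the printed delta divided by 10^8 approximates the cycle
cost of one kill() invocation.  RDTSC returns the time-stamp counter split
across EDX (high half) and EAX (low half); the asm above shifts RDX left
by 32, re-zero-extends EAX ("mov %eax, %eax" clears bits 63:32 of RAX)
and ORs the halves together.  A commented equivalent on x86-64 (the helper
name is made up for illustration, the semantics are the same) would be:

    static inline unsigned long rdtsc_annotated(void)
    {
        unsigned int lo, hi;

        /* EDX:EAX <- TSC; volatile keeps the compiler from optimizing the read away */
        asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long)hi << 32) | lo;
    }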

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/entry/common.c          |  4 ++--
 include/linux/context_tracking.h | 15 +++++++++++++++
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index ec138e538c44..618bc61d35b7 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -43,7 +43,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
 __visible void enter_from_user_mode(void)
 {
 	CT_WARN_ON(ct_state() != CONTEXT_USER);
-	user_exit();
+	user_exit_irqoff();
 }
 #else
 static inline void enter_from_user_mode(void) {}
@@ -274,7 +274,7 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 	ti->status &= ~TS_COMPAT;
 #endif
 
-	user_enter();
+	user_enter_irqoff();
 }
 
 #define SYSCALL_EXIT_WORK_FLAGS				\
diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index d259274238db..d9aef2a0ec8e 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -31,6 +31,19 @@ static inline void user_exit(void)
 		context_tracking_exit(CONTEXT_USER);
 }
 
+/* Called with interrupts disabled.  */
+static inline void user_enter_irqoff(void)
+{
+	if (context_tracking_is_enabled())
+		__context_tracking_enter(CONTEXT_USER);
+
+}
+static inline void user_exit_irqoff(void)
+{
+	if (context_tracking_is_enabled())
+		__context_tracking_exit(CONTEXT_USER);
+}
+
 static inline enum ctx_state exception_enter(void)
 {
 	enum ctx_state prev_ctx;
@@ -69,6 +82,8 @@ static inline enum ctx_state ct_state(void)
 #else
 static inline void user_enter(void) { }
 static inline void user_exit(void) { }
+static inline void user_enter_irqoff(void) { }
+static inline void user_exit_irqoff(void) { }
 static inline enum ctx_state exception_enter(void) { return 0; }
 static inline void exception_exit(enum ctx_state prev_ctx) { }
 static inline enum ctx_state ct_state(void) { return CONTEXT_DISABLED; }
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 2/2] x86/entry: Inline enter_from_user_mode
  2016-06-20 14:58 [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Paolo Bonzini
  2016-06-20 14:58 ` [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore Paolo Bonzini
@ 2016-06-20 14:58 ` Paolo Bonzini
  2016-06-20 20:22   ` Rik van Riel
                     ` (2 more replies)
  2016-06-20 14:58 ` [PATCH 3/2] context_tracking: move rcu_virt_note_context_switch out of kvm_host.h Paolo Bonzini
                   ` (4 subsequent siblings)
  6 siblings, 3 replies; 25+ messages in thread
From: Paolo Bonzini @ 2016-06-20 14:58 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H. Peter Anvin,
	Ingo Molnar, Thomas Gleixner

This matches what is already done for prepare_exit_to_usermode,
and saves about 60 clock cycles (4% speedup) with the benchmark
in the previous commit message.
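
A note on the __visible inline combination in the hunk below: the kernel
builds with gnu89 inline semantics, under which an extern function marked
plain "inline" still gets an out-of-line copy emitted, and __visible
(roughly __attribute__((externally_visible)) in the compiler headers)
keeps that copy from being dropped, so callers outside the C file (the
function is also reached from the assembly entry code, which is why it
is not simply static) keep working while C callers in common.c can inline
the body.  See also the gnu89-inline discussion further down this thread.
A stand-alone sketch of the pattern (names illustrative):

    #define __visible __attribute__((externally_visible))

    /* Out-of-line copy still emitted (gnu89 rules + externally_visible);
     * callers in the same translation unit are free to inline the body. */
    __visible inline void sketch_enter_from_user_mode(void)
    {
        /* ... context tracking work ... */
    }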

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/entry/common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 618bc61d35b7..9e1e27d31c6d 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -40,7 +40,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
 
 #ifdef CONFIG_CONTEXT_TRACKING
 /* Called on entry from user mode with IRQs off. */
-__visible void enter_from_user_mode(void)
+__visible inline void enter_from_user_mode(void)
 {
 	CT_WARN_ON(ct_state() != CONTEXT_USER);
 	user_exit_irqoff();
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 3/2] context_tracking: move rcu_virt_note_context_switch out of kvm_host.h
  2016-06-20 14:58 [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Paolo Bonzini
  2016-06-20 14:58 ` [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore Paolo Bonzini
  2016-06-20 14:58 ` [PATCH 2/2] x86/entry: Inline enter_from_user_mode Paolo Bonzini
@ 2016-06-20 14:58 ` Paolo Bonzini
  2016-06-20 20:23   ` Rik van Riel
  2016-06-20 14:58 ` [PATCH 4/2] KVM: remove kvm_guest_enter/exit wrappers Paolo Bonzini
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 25+ messages in thread
From: Paolo Bonzini @ 2016-06-20 14:58 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H. Peter Anvin,
	Ingo Molnar, Thomas Gleixner

Make kvm_guest_{enter,exit} and __kvm_guest_{enter,exit} trivial wrappers
around the code in context_tracking.h.  Name the context_tracking.h functions
consistently with those used for the kernel<->user switch.
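
As a quick reference, the renaming that patches 3 and 4 put in place maps
the old KVM wrappers onto the context_tracking.h functions as follows:

    /*
     * __kvm_guest_enter()  ->  guest_enter_irqoff()   caller has IRQs disabled
     * __kvm_guest_exit()   ->  guest_exit_irqoff()    caller has IRQs disabled
     * kvm_guest_enter()    ->  guest_enter()          saves/restores the IRQ flag
     * kvm_guest_exit()     ->  guest_exit()           saves/restores the IRQ flag
     */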

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/linux/context_tracking.h | 38 ++++++++++++++++++++++++++++++++++----
 include/linux/kvm_host.h         | 25 ++++---------------------
 2 files changed, 38 insertions(+), 25 deletions(-)

diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index d9aef2a0ec8e..c78fc27418f2 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -99,7 +99,8 @@ static inline void context_tracking_init(void) { }
 
 
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN
-static inline void guest_enter(void)
+/* must be called with irqs disabled */
+static inline void guest_enter_irqoff(void)
 {
 	if (vtime_accounting_cpu_enabled())
 		vtime_guest_enter(current);
@@ -108,9 +109,19 @@ static inline void guest_enter(void)
 
 	if (context_tracking_is_enabled())
 		__context_tracking_enter(CONTEXT_GUEST);
+
+	/* KVM does not hold any references to rcu protected data when it
+	 * switches CPU into a guest mode. In fact switching to a guest mode
+	 * is very similar to exiting to userspace from rcu point of view. In
+	 * addition CPU may stay in a guest mode for quite a long time (up to
+	 * one time slice). Lets treat guest mode as quiescent state, just like
+	 * we do with user-mode execution.
+	 */
+	if (!context_tracking_cpu_is_enabled())
+		rcu_virt_note_context_switch(smp_processor_id());
 }
 
-static inline void guest_exit(void)
+static inline void guest_exit_irqoff(void)
 {
 	if (context_tracking_is_enabled())
 		__context_tracking_exit(CONTEXT_GUEST);
@@ -122,7 +133,7 @@ static inline void guest_exit(void)
 }
 
 #else
-static inline void guest_enter(void)
+static inline void guest_enter_irqoff(void)
 {
 	/*
 	 * This is running in ioctl context so its safe
@@ -131,9 +142,10 @@ static inline void guest_enter(void)
 	 */
 	vtime_account_system(current);
 	current->flags |= PF_VCPU;
+	rcu_virt_note_context_switch(smp_processor_id());
 }
 
-static inline void guest_exit(void)
+static inline void guest_exit_irqoff(void)
 {
 	/* Flush the guest cputime we spent on the guest */
 	vtime_account_system(current);
@@ -141,4 +153,22 @@ static inline void guest_exit(void)
 }
 #endif /* CONFIG_VIRT_CPU_ACCOUNTING_GEN */
 
+static inline void guest_enter(void)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	guest_enter_irqoff();
+	local_irq_restore(flags);
+}
+
+static inline void guest_exit(void)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	guest_exit_irqoff();
+	local_irq_restore(flags);
+}
+
 #endif
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1c9c973a7dd9..30c9224545b1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -870,40 +870,23 @@ static inline void kvm_iommu_unmap_pages(struct kvm *kvm,
 /* must be called with irqs disabled */
 static inline void __kvm_guest_enter(void)
 {
-	guest_enter();
-	/* KVM does not hold any references to rcu protected data when it
-	 * switches CPU into a guest mode. In fact switching to a guest mode
-	 * is very similar to exiting to userspace from rcu point of view. In
-	 * addition CPU may stay in a guest mode for quite a long time (up to
-	 * one time slice). Lets treat guest mode as quiescent state, just like
-	 * we do with user-mode execution.
-	 */
-	if (!context_tracking_cpu_is_enabled())
-		rcu_virt_note_context_switch(smp_processor_id());
+	guest_enter_irqoff();
 }
 
 /* must be called with irqs disabled */
 static inline void __kvm_guest_exit(void)
 {
-	guest_exit();
+	guest_exit_irqoff();
 }
 
 static inline void kvm_guest_enter(void)
 {
-	unsigned long flags;
-
-	local_irq_save(flags);
-	__kvm_guest_enter();
-	local_irq_restore(flags);
+	guest_enter();
 }
 
 static inline void kvm_guest_exit(void)
 {
-	unsigned long flags;
-
-	local_irq_save(flags);
-	__kvm_guest_exit();
-	local_irq_restore(flags);
+	guest_exit();
 }
 
 /*
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 4/2] KVM: remove kvm_guest_enter/exit wrappers
  2016-06-20 14:58 [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Paolo Bonzini
                   ` (2 preceding siblings ...)
  2016-06-20 14:58 ` [PATCH 3/2] context_tracking: move rcu_virt_note_context_switch out of kvm_host.h Paolo Bonzini
@ 2016-06-20 14:58 ` Paolo Bonzini
  2016-06-20 20:24   ` Rik van Riel
  2016-06-21 13:24 ` [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Christian Borntraeger
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 25+ messages in thread
From: Paolo Bonzini @ 2016-06-20 14:58 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H. Peter Anvin,
	Ingo Molnar, Thomas Gleixner

Use the functions from context_tracking.h directly.
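
The per-architecture hunks below all follow the same pattern; a condensed
sketch of a converted vcpu run path (illustrative only, not taken from any
one architecture) looks like:

    static void vcpu_run_sketch(void)
    {
        local_irq_disable();
        guest_enter_irqoff();           /* was __kvm_guest_enter() */

        /* ... enter and run the guest ... */

        guest_exit_irqoff();            /* was __kvm_guest_exit() */
        local_irq_enable();
    }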

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/arm/kvm/arm.c           |  8 ++++----
 arch/mips/kvm/mips.c         |  4 ++--
 arch/powerpc/kvm/book3s_hv.c |  4 ++--
 arch/powerpc/kvm/book3s_pr.c |  4 ++--
 arch/powerpc/kvm/booke.c     |  4 ++--
 arch/powerpc/kvm/powerpc.c   |  2 +-
 arch/s390/kvm/kvm-s390.c     |  4 ++--
 arch/x86/kvm/x86.c           |  4 ++--
 include/linux/kvm_host.h     | 22 ----------------------
 9 files changed, 17 insertions(+), 39 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 893941ec98dc..96b473c745a6 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -615,7 +615,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * Enter the guest
 		 */
 		trace_kvm_entry(*vcpu_pc(vcpu));
-		__kvm_guest_enter();
+		guest_enter_irqoff();
 		vcpu->mode = IN_GUEST_MODE;
 
 		ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
@@ -641,14 +641,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		local_irq_enable();
 
 		/*
-		 * We do local_irq_enable() before calling kvm_guest_exit() so
+		 * We do local_irq_enable() before calling guest_exit() so
 		 * that if a timer interrupt hits while running the guest we
 		 * account that tick as being spent in the guest.  We enable
-		 * preemption after calling kvm_guest_exit() so that if we get
+		 * preemption after calling guest_exit() so that if we get
 		 * preempted we make sure ticks after that is not counted as
 		 * guest time.
 		 */
-		kvm_guest_exit();
+		guest_exit();
 		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
 		/*
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index dc052fb5c7a2..7b3f438802db 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -399,7 +399,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	kvm_mips_deliver_interrupts(vcpu,
 				    kvm_read_c0_guest_cause(vcpu->arch.cop0));
 
-	__kvm_guest_enter();
+	guest_enter_irqoff();
 
 	/* Disable hardware page table walking while in guest */
 	htw_stop();
@@ -409,7 +409,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	/* Re-enable HTW before enabling interrupts */
 	htw_start();
 
-	__kvm_guest_exit();
+	guest_exit_irqoff();
 	local_irq_enable();
 
 	if (vcpu->sigset_active)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e20beae5ca7a..6b2859c12ae8 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2522,7 +2522,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
 		list_for_each_entry(pvc, &core_info.vcs[sub], preempt_list)
 			spin_unlock(&pvc->lock);
 
-	kvm_guest_enter();
+	guest_enter();
 
 	srcu_idx = srcu_read_lock(&vc->kvm->srcu);
 
@@ -2570,7 +2570,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
 
 	/* make sure updates to secondary vcpu structs are visible now */
 	smp_mb();
-	kvm_guest_exit();
+	guest_exit();
 
 	for (sub = 0; sub < core_info.n_subcores; ++sub)
 		list_for_each_entry_safe(pvc, vcnext, &core_info.vcs[sub],
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 8e4f64f0b774..6a66c5ff0827 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -914,7 +914,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	/* We get here with MSR.EE=1 */
 
 	trace_kvm_exit(exit_nr, vcpu);
-	kvm_guest_exit();
+	guest_exit();
 
 	switch (exit_nr) {
 	case BOOK3S_INTERRUPT_INST_STORAGE:
@@ -1531,7 +1531,7 @@ static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 
 	kvmppc_clear_debug(vcpu);
 
-	/* No need for kvm_guest_exit. It's done in handle_exit.
+	/* No need for guest_exit. It's done in handle_exit.
 	   We also get here with interrupts enabled. */
 
 	/* Make sure we save the guest FPU/Altivec/VSX state */
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 4afae695899a..02b4672f7347 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -776,7 +776,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 
 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
 
-	/* No need for kvm_guest_exit. It's done in handle_exit.
+	/* No need for guest_exit. It's done in handle_exit.
 	   We also get here with interrupts enabled. */
 
 	/* Switch back to user space debug context */
@@ -1012,7 +1012,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	}
 
 	trace_kvm_exit(exit_nr, vcpu);
-	__kvm_guest_exit();
+	guest_exit_irqoff();
 
 	local_irq_enable();
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 02416fea7653..1ac036e45ed4 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -119,7 +119,7 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
 			continue;
 		}
 
-		__kvm_guest_enter();
+		guest_enter_irqoff();
 		return 1;
 	}
 
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 6d8ec3ac9dd8..6d054561f3a6 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2361,14 +2361,14 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
 		 * guest_enter and guest_exit should be no uaccess.
 		 */
 		local_irq_disable();
-		__kvm_guest_enter();
+		guest_enter_irqoff();
 		__disable_cpu_timer_accounting(vcpu);
 		local_irq_enable();
 		exit_reason = sie64a(vcpu->arch.sie_block,
 				     vcpu->run->s.regs.gprs);
 		local_irq_disable();
 		__enable_cpu_timer_accounting(vcpu);
-		__kvm_guest_exit();
+		guest_exit_irqoff();
 		local_irq_enable();
 		vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 902d9da12392..12d4f1fb568c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6636,7 +6636,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	trace_kvm_entry(vcpu->vcpu_id);
 	wait_lapic_expire(vcpu);
-	__kvm_guest_enter();
+	guest_enter_irqoff();
 
 	if (unlikely(vcpu->arch.switch_db_regs)) {
 		set_debugreg(0, 7);
@@ -6695,7 +6695,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 */
 	barrier();
 
-	kvm_guest_exit();
+	guest_exit();
 
 	preempt_enable();
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 30c9224545b1..6fe786901df1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -867,28 +867,6 @@ static inline void kvm_iommu_unmap_pages(struct kvm *kvm,
 }
 #endif
 
-/* must be called with irqs disabled */
-static inline void __kvm_guest_enter(void)
-{
-	guest_enter_irqoff();
-}
-
-/* must be called with irqs disabled */
-static inline void __kvm_guest_exit(void)
-{
-	guest_exit_irqoff();
-}
-
-static inline void kvm_guest_enter(void)
-{
-	guest_enter();
-}
-
-static inline void kvm_guest_exit(void)
-{
-	guest_exit();
-}
-
 /*
  * search_memslots() and __gfn_to_memslot() are here because they are
  * used in non-modular code in arch/powerpc/kvm/book3s_hv_rm_mmu.c.
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore
  2016-06-20 14:58 ` [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore Paolo Bonzini
@ 2016-06-20 20:21   ` Rik van Riel
  2016-06-20 20:34   ` Andy Lutomirski
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 25+ messages in thread
From: Rik van Riel @ 2016-06-20 20:21 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, H. Peter Anvin, Ingo Molnar,
	Thomas Gleixner

On Mon, 2016-06-20 at 16:58 +0200, Paolo Bonzini wrote:
> Thanks to all the work that was done by Andy Lutomirski and others,
> enter_from_user_mode and prepare_exit_to_usermode are now called only
> with interrupts disabled.  Let's provide them a version of
> user_enter/user_exit that skips saving and restoring the interrupt flag.
> 
> On an AMD-based machine I tested this patch on, with force-enabled
> context tracking, the speed-up in system calls was 90 clock cycles or 6%,
> measured with the following simple benchmark:
> 
>     #include <sys/signal.h>
>     #include <time.h>
>     #include <unistd.h>
>     #include <stdio.h>
> 
>     unsigned long rdtsc()
>     {
>         unsigned long result;
>         asm volatile("rdtsc; shl $32, %%rdx; mov %%eax, %%eax\n"
>                      "or %%rdx, %%rax" : "=a" (result) : : "rdx");
>         return result;
>     }
> 
>     int main()
>     {
>         unsigned long tsc1, tsc2;
>         int pid = getpid();
>         int i;
> 
>         tsc1 = rdtsc();
>         for (i = 0; i < 100000000; i++)
>             kill(pid, SIGWINCH);
>         tsc2 = rdtsc();
> 
>         printf("%ld\n", tsc2 - tsc1);
>     }
> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> 
Reviewed-by: Rik van Riel <riel@redhat.com>

-- 
All Rights Reversed.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/2] x86/entry: Inline enter_from_user_mode
  2016-06-20 14:58 ` [PATCH 2/2] x86/entry: Inline enter_from_user_mode Paolo Bonzini
@ 2016-06-20 20:22   ` Rik van Riel
  2016-07-09 11:53   ` [tip:x86/asm] x86/entry: Inline enter_from_user_mode() tip-bot for Paolo Bonzini
  2016-07-10 11:38   ` tip-bot for Paolo Bonzini
  2 siblings, 0 replies; 25+ messages in thread
From: Rik van Riel @ 2016-06-20 20:22 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, H. Peter Anvin, Ingo Molnar,
	Thomas Gleixner

On Mon, 2016-06-20 at 16:58 +0200, Paolo Bonzini wrote:
> This matches what is already done for prepare_exit_to_usermode,
> and saves about 60 clock cycles (4% speedup) with the benchmark
> in the previous commit message.
> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> 

Reviewed-by: Rik van Riel <riel@redhat.com>

-- 
All Rights Reversed.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/2] context_tracking: move rcu_virt_note_context_switch out of kvm_host.h
  2016-06-20 14:58 ` [PATCH 3/2] context_tracking: move rcu_virt_note_context_switch out of kvm_host.h Paolo Bonzini
@ 2016-06-20 20:23   ` Rik van Riel
  0 siblings, 0 replies; 25+ messages in thread
From: Rik van Riel @ 2016-06-20 20:23 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, H. Peter Anvin, Ingo Molnar,
	Thomas Gleixner

On Mon, 2016-06-20 at 16:58 +0200, Paolo Bonzini wrote:
> Make kvm_guest_{enter,exit} and __kvm_guest_{enter,exit} trivial wrappers
> around the code in context_tracking.h.  Name the context_tracking.h
> functions consistently with those used for the kernel<->user switch.
> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> 

Yay. I remember being confused by this when going
through the context tracking code last year.

Reviewed-by: Rik van Riel <riel@redhat.com>

-- 
All Rights Reversed.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 4/2] KVM: remove kvm_guest_enter/exit wrappers
  2016-06-20 14:58 ` [PATCH 4/2] KVM: remove kvm_guest_enter/exit wrappers Paolo Bonzini
@ 2016-06-20 20:24   ` Rik van Riel
  0 siblings, 0 replies; 25+ messages in thread
From: Rik van Riel @ 2016-06-20 20:24 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, H. Peter Anvin, Ingo Molnar,
	Thomas Gleixner

On Mon, 2016-06-20 at 16:58 +0200, Paolo Bonzini wrote:
> Use the functions from context_tracking.h directly.
> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> 
Reviewed-by: Rik van Riel <riel@redhat.com>

-- 
All Rights Reversed.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore
  2016-06-20 14:58 ` [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore Paolo Bonzini
  2016-06-20 20:21   ` Rik van Riel
@ 2016-06-20 20:34   ` Andy Lutomirski
  2016-07-09 11:52   ` [tip:x86/asm] " tip-bot for Paolo Bonzini
  2016-07-10 11:37   ` tip-bot for Paolo Bonzini
  3 siblings, 0 replies; 25+ messages in thread
From: Andy Lutomirski @ 2016-06-20 20:34 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm list, Andy Lutomirski, Peter Zijlstra,
	Rik van Riel, H. Peter Anvin, Ingo Molnar, Thomas Gleixner

On Mon, Jun 20, 2016 at 7:58 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Thanks to all the work that was done by Andy Lutomirski and others,
> enter_from_user_mode and prepare_exit_to_usermode are now called only with
> interrupts disabled.  Let's provide them a version of user_enter/user_exit
> that skips saving and restoring the interrupt flag.

You're also skipping the in_interrupt() check, but that appears to be fine.

Reviewed-by: Andy Lutomirski <luto@kernel.org>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles
  2016-06-20 14:58 [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Paolo Bonzini
                   ` (3 preceding siblings ...)
  2016-06-20 14:58 ` [PATCH 4/2] KVM: remove kvm_guest_enter/exit wrappers Paolo Bonzini
@ 2016-06-21 13:24 ` Christian Borntraeger
  2016-06-21 13:26   ` Paolo Bonzini
  2016-06-28 12:16 ` Paolo Bonzini
  2016-07-06 13:47 ` Paolo Bonzini
  6 siblings, 1 reply; 25+ messages in thread
From: Christian Borntraeger @ 2016-06-21 13:24 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H . Peter Anvin,
	Ingo Molnar, Thomas Gleixner

On 06/20/2016 04:58 PM, Paolo Bonzini wrote:
> The first two patches are the optimizations I posted on May 30th
> for the system call entry/exit code.  The only change is in the
> function names, which use the user_{enter,exit}_irqoff naming favored
> by Andy and Ingo.  The first patch matches what commit d0e536d8939
> ("context_tracking: avoid irq_save/irq_restore on guest entry and exit",
> 2015-10-28) did for guest entry and exit.  The second simply adds
> an inline annotation; the compiler doesn't figure it out because the
> function is not static.
> 
> The last two patches move guest_{enter,exit} to the same naming
> convention, removing the KVM wrappers kvm_guest_{enter,exit} and
> __kvm_guest_{enter,exit} in the process.  I would like these two to
> go through the KVM tree because I have other optimizations for 4.8
> on top of these patches.
> 
> Thanks,
> 
> Paolo
> 
> Paolo Bonzini (4):
>   x86/entry: Avoid interrupt flag save and restore
>   x86/entry: Inline enter_from_user_mode
>   context_tracking: move rcu_virt_note_context_switch out of kvm_host.h
>   KVM: remove kvm_guest_enter/exit wrappers
> 
>  arch/arm/kvm/arm.c               |  8 +++---
>  arch/mips/kvm/mips.c             |  4 +--
>  arch/powerpc/kvm/book3s_hv.c     |  4 +--
>  arch/powerpc/kvm/book3s_pr.c     |  4 +--
>  arch/powerpc/kvm/booke.c         |  4 +--
>  arch/powerpc/kvm/powerpc.c       |  2 +-
>  arch/s390/kvm/kvm-s390.c         |  4 +--
>  arch/x86/entry/common.c          |  6 ++---
>  arch/x86/kvm/x86.c               |  4 +--
>  include/linux/context_tracking.h | 53 +++++++++++++++++++++++++++++++++++++---
>  include/linux/kvm_host.h         | 39 -----------------------------
>  11 files changed, 69 insertions(+), 63 deletions(-)
> 
Series looks sane and does work on s390.
It has a minor conflict with my vsie pull request, so either add vsie.c
to this patch set or fix up my pull request in the merge commit to replace
kvm_guest_exit/enter with the new functions.

Christian

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles
  2016-06-21 13:24 ` [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Christian Borntraeger
@ 2016-06-21 13:26   ` Paolo Bonzini
  0 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2016-06-21 13:26 UTC (permalink / raw)
  To: Christian Borntraeger, linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H . Peter Anvin,
	Ingo Molnar, Thomas Gleixner



On 21/06/2016 15:24, Christian Borntraeger wrote:
> Series looks sane and does work on s390.
> It has a minor conflict with my vsie pull request, so either add vsie.c
> to this patch set or fix up my pull request in the merge commit to replace
> kvm_guest_exit/enter with the new functions.

It should not be an issue if I take the last two patches as proposed.

Paolo

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles
  2016-06-20 14:58 [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Paolo Bonzini
                   ` (4 preceding siblings ...)
  2016-06-21 13:24 ` [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Christian Borntraeger
@ 2016-06-28 12:16 ` Paolo Bonzini
  2016-07-06 13:47 ` Paolo Bonzini
  6 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2016-06-28 12:16 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H . Peter Anvin,
	Ingo Molnar, Thomas Gleixner



On 20/06/2016 16:58, Paolo Bonzini wrote:
> The first two patches are the optimizations I posted on May 30th
> for the system call entry/exit code.  The only change is in the
> function names, which use the user_{enter,exit}_irqoff naming favored
> by Andy and Ingo.  The first patch matches what commit d0e536d8939
> ("context_tracking: avoid irq_save/irq_restore on guest entry and exit",
> 2015-10-28) did for guest entry and exit.  The second simply adds
> an inline annotation; the compiler doesn't figure it out because the
> function is not static.
> 
> The last two patches move guest_{enter,exit} to the same naming
> convention, removing the KVM wrappers kvm_guest_{enter,exit} and
> __kvm_guest_{enter,exit} in the process.  I would like these two to
> go through the KVM tree because I have other optimizations for 4.8
> on top of these patches.

I'm applying patches 3 and 4 to the KVM tree.

Paolo

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles
  2016-06-20 14:58 [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Paolo Bonzini
                   ` (5 preceding siblings ...)
  2016-06-28 12:16 ` Paolo Bonzini
@ 2016-07-06 13:47 ` Paolo Bonzini
  6 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2016-07-06 13:47 UTC (permalink / raw)
  To: linux-kernel, kvm, Ingo Molnar
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H . Peter Anvin,
	Thomas Gleixner



On 20/06/2016 16:58, Paolo Bonzini wrote:
> The first two patches are the optimizations I posted on May 30th
> for the system call entry/exit code.  The only change is in the
> function names, which use the user_{enter,exit}_irqoff naming favored
> by Andy and Ingo.  The first patch matches what commit d0e536d8939
> ("context_tracking: avoid irq_save/irq_restore on guest entry and exit",
> 2015-10-28) did for guest entry and exit.  The second simply adds
> an inline annotation; the compiler doesn't figure it out because the
> function is not static.
> 
> The last two patches move guest_{enter,exit} to the same naming
> convention, removing the KVM wrappers kvm_guest_{enter,exit} and
> __kvm_guest_{enter,exit} in the process.  I would like these two to
> go through the KVM tree because I have other optimizations for 4.8
> on top of these patches.
> 
> Thanks,

Ingo,

ping for these two patches:

http://article.gmane.org/gmane.linux.kernel/2248541/raw
http://article.gmane.org/gmane.comp.emulators.kvm.devel/153909/raw

Thanks,

Paolo

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [tip:x86/asm] x86/entry: Avoid interrupt flag save and restore
  2016-06-20 14:58 ` [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore Paolo Bonzini
  2016-06-20 20:21   ` Rik van Riel
  2016-06-20 20:34   ` Andy Lutomirski
@ 2016-07-09 11:52   ` tip-bot for Paolo Bonzini
  2016-07-10 11:37   ` tip-bot for Paolo Bonzini
  3 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Paolo Bonzini @ 2016-07-09 11:52 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: jpoimboe, riel, bp, luto, torvalds, linux-kernel, dvlasenk,
	mingo, peterz, hpa, brgerst, pbonzini, tglx

Commit-ID:  0b95364f977c180e1f336e00273fda5d3eca54b4
Gitweb:     http://git.kernel.org/tip/0b95364f977c180e1f336e00273fda5d3eca54b4
Author:     Paolo Bonzini <pbonzini@redhat.com>
AuthorDate: Mon, 20 Jun 2016 16:58:29 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 9 Jul 2016 10:44:01 +0200

x86/entry: Avoid interrupt flag save and restore

Thanks to all the work that was done by Andy Lutomirski and others,
enter_from_user_mode() and prepare_exit_to_usermode() are now called only with
interrupts disabled.  Let's provide them a version of user_enter()/user_exit()
that skips saving and restoring the interrupt flag.

On an AMD-based machine I tested this patch on, with force-enabled
context tracking, the speed-up in system calls was 90 clock cycles or 6%,
measured with the following simple benchmark:

    #include <sys/signal.h>
    #include <time.h>
    #include <unistd.h>
    #include <stdio.h>

    unsigned long rdtsc()
    {
        unsigned long result;
        asm volatile("rdtsc; shl $32, %%rdx; mov %%eax, %%eax\n"
                     "or %%rdx, %%rax" : "=a" (result) : : "rdx");
        return result;
    }

    int main()
    {
        unsigned long tsc1, tsc2;
        int pid = getpid();
        int i;

        tsc1 = rdtsc();
        for (i = 0; i < 100000000; i++)
            kill(pid, SIGWINCH);
        tsc2 = rdtsc();

        printf("%ld\n", tsc2 - tsc1);
    }

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kvm@vger.kernel.org
Link: http://lkml.kernel.org/r/1466434712-31440-2-git-send-email-pbonzini@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/entry/common.c          |  4 ++--
 include/linux/context_tracking.h | 15 +++++++++++++++
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index ec138e5..618bc61 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -43,7 +43,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
 __visible void enter_from_user_mode(void)
 {
 	CT_WARN_ON(ct_state() != CONTEXT_USER);
-	user_exit();
+	user_exit_irqoff();
 }
 #else
 static inline void enter_from_user_mode(void) {}
@@ -274,7 +274,7 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 	ti->status &= ~TS_COMPAT;
 #endif
 
-	user_enter();
+	user_enter_irqoff();
 }
 
 #define SYSCALL_EXIT_WORK_FLAGS				\
diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index d259274..d9aef2a 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -31,6 +31,19 @@ static inline void user_exit(void)
 		context_tracking_exit(CONTEXT_USER);
 }
 
+/* Called with interrupts disabled.  */
+static inline void user_enter_irqoff(void)
+{
+	if (context_tracking_is_enabled())
+		__context_tracking_enter(CONTEXT_USER);
+
+}
+static inline void user_exit_irqoff(void)
+{
+	if (context_tracking_is_enabled())
+		__context_tracking_exit(CONTEXT_USER);
+}
+
 static inline enum ctx_state exception_enter(void)
 {
 	enum ctx_state prev_ctx;
@@ -69,6 +82,8 @@ static inline enum ctx_state ct_state(void)
 #else
 static inline void user_enter(void) { }
 static inline void user_exit(void) { }
+static inline void user_enter_irqoff(void) { }
+static inline void user_exit_irqoff(void) { }
 static inline enum ctx_state exception_enter(void) { return 0; }
 static inline void exception_exit(enum ctx_state prev_ctx) { }
 static inline enum ctx_state ct_state(void) { return CONTEXT_DISABLED; }

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip:x86/asm] x86/entry: Inline enter_from_user_mode()
  2016-06-20 14:58 ` [PATCH 2/2] x86/entry: Inline enter_from_user_mode Paolo Bonzini
  2016-06-20 20:22   ` Rik van Riel
@ 2016-07-09 11:53   ` tip-bot for Paolo Bonzini
  2016-07-09 13:38     ` Borislav Petkov
  2016-07-10 11:38   ` tip-bot for Paolo Bonzini
  2 siblings, 1 reply; 25+ messages in thread
From: tip-bot for Paolo Bonzini @ 2016-07-09 11:53 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, peterz, riel, bp, torvalds, hpa, mingo, pbonzini,
	luto, tglx, brgerst, dvlasenk, jpoimboe

Commit-ID:  eec4b1227db153ca16f8f5f285d01fefdce05438
Gitweb:     http://git.kernel.org/tip/eec4b1227db153ca16f8f5f285d01fefdce05438
Author:     Paolo Bonzini <pbonzini@redhat.com>
AuthorDate: Mon, 20 Jun 2016 16:58:30 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 9 Jul 2016 10:44:02 +0200

x86/entry: Inline enter_from_user_mode()

This matches what is already done for prepare_exit_to_usermode(),
and saves about 60 clock cycles (4% speedup) with the benchmark
in the previous commit message.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kvm@vger.kernel.org
Link: http://lkml.kernel.org/r/1466434712-31440-3-git-send-email-pbonzini@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/entry/common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 618bc61..9e1e27d 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -40,7 +40,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
 
 #ifdef CONFIG_CONTEXT_TRACKING
 /* Called on entry from user mode with IRQs off. */
-__visible void enter_from_user_mode(void)
+__visible inline void enter_from_user_mode(void)
 {
 	CT_WARN_ON(ct_state() != CONTEXT_USER);
 	user_exit_irqoff();

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [tip:x86/asm] x86/entry: Inline enter_from_user_mode()
  2016-07-09 11:53   ` [tip:x86/asm] x86/entry: Inline enter_from_user_mode() tip-bot for Paolo Bonzini
@ 2016-07-09 13:38     ` Borislav Petkov
  2016-07-10 11:33       ` Ingo Molnar
  0 siblings, 1 reply; 25+ messages in thread
From: Borislav Petkov @ 2016-07-09 13:38 UTC (permalink / raw)
  To: linux-tip-commits, tip-bot for Paolo Bonzini
  Cc: linux-kernel, peterz, riel, torvalds, hpa, mingo, pbonzini, luto,
	tglx, brgerst, dvlasenk, jpoimboe

tip-bot for Paolo Bonzini <tipbot@zytor.com> wrote:

>Commit-ID:  eec4b1227db153ca16f8f5f285d01fefdce05438
>Gitweb:    
>http://git.kernel.org/tip/eec4b1227db153ca16f8f5f285d01fefdce05438
>Author:     Paolo Bonzini <pbonzini@redhat.com>
>AuthorDate: Mon, 20 Jun 2016 16:58:30 +0200
>Committer:  Ingo Molnar <mingo@kernel.org>
>CommitDate: Sat, 9 Jul 2016 10:44:02 +0200
>
>x86/entry: Inline enter_from_user_mode()
>
>This matches what is already done for prepare_exit_to_usermode(),
>and saves about 60 clock cycles (4% speedup) with the benchmark
>in the previous commit message.
>
>Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>Reviewed-by: Rik van Riel <riel@redhat.com>
>Reviewed-by: Andy Lutomirski <luto@kernel.org>
>Reviewed-by: Rik van Riel <riel@redhat.com>
>Reviewed-by: Andy Lutomirski <luto@kernel.org>
>Reviewed-by: Rik van Riel <riel@redhat.com>
>Reviewed-by: Andy Lutomirski <luto@kernel.org>
>Reviewed-by: Rik van Riel <riel@redhat.com>
>Reviewed-by: Andy Lutomirski <luto@kernel.org>
>Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Woohaa, if that amount of review doesn't get this patch upstream I don't know what will ;-)))))

-- 
Sent from a small device: formatting sucks and brevity is inevitable. 

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [tip:x86/asm] x86/entry: Inline enter_from_user_mode()
  2016-07-09 13:38     ` Borislav Petkov
@ 2016-07-10 11:33       ` Ingo Molnar
  0 siblings, 0 replies; 25+ messages in thread
From: Ingo Molnar @ 2016-07-10 11:33 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-tip-commits, tip-bot for Paolo Bonzini, linux-kernel,
	peterz, riel, torvalds, hpa, pbonzini, luto, tglx, brgerst,
	dvlasenk, jpoimboe


* Borislav Petkov <bp@alien8.de> wrote:

> tip-bot for Paolo Bonzini <tipbot@zytor.com> wrote:
> 
> >Commit-ID:  eec4b1227db153ca16f8f5f285d01fefdce05438
> >Gitweb:    
> >http://git.kernel.org/tip/eec4b1227db153ca16f8f5f285d01fefdce05438
> >Author:     Paolo Bonzini <pbonzini@redhat.com>
> >AuthorDate: Mon, 20 Jun 2016 16:58:30 +0200
> >Committer:  Ingo Molnar <mingo@kernel.org>
> >CommitDate: Sat, 9 Jul 2016 10:44:02 +0200
> >
> >x86/entry: Inline enter_from_user_mode()
> >
> >This matches what is already done for prepare_exit_to_usermode(),
> >and saves about 60 clock cycles (4% speedup) with the benchmark
> >in the previous commit message.
> >
> >Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> >Reviewed-by: Rik van Riel <riel@redhat.com>
> >Reviewed-by: Andy Lutomirski <luto@kernel.org>
> >Reviewed-by: Rik van Riel <riel@redhat.com>
> >Reviewed-by: Andy Lutomirski <luto@kernel.org>
> >Reviewed-by: Rik van Riel <riel@redhat.com>
> >Reviewed-by: Andy Lutomirski <luto@kernel.org>
> >Reviewed-by: Rik van Riel <riel@redhat.com>
> >Reviewed-by: Andy Lutomirski <luto@kernel.org>
> >Acked-by: Paolo Bonzini <pbonzini@redhat.com>
> 
> Woohaa, if that amount of review doesn't get this patch upstream I don't know what will ;-)))))

Gah, that's a script gone bad - fixed!

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [tip:x86/asm] x86/entry: Avoid interrupt flag save and restore
  2016-06-20 14:58 ` [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore Paolo Bonzini
                     ` (2 preceding siblings ...)
  2016-07-09 11:52   ` [tip:x86/asm] " tip-bot for Paolo Bonzini
@ 2016-07-10 11:37   ` tip-bot for Paolo Bonzini
  3 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Paolo Bonzini @ 2016-07-10 11:37 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, luto, mingo, pbonzini, brgerst, hpa, jpoimboe, bp,
	linux-kernel, riel, dvlasenk, peterz, torvalds

Commit-ID:  2e9d1e150abf88cb63e5d34ca286edbb95b4c53d
Gitweb:     http://git.kernel.org/tip/2e9d1e150abf88cb63e5d34ca286edbb95b4c53d
Author:     Paolo Bonzini <pbonzini@redhat.com>
AuthorDate: Mon, 20 Jun 2016 16:58:29 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 10 Jul 2016 13:33:02 +0200

x86/entry: Avoid interrupt flag save and restore

Thanks to all the work that was done by Andy Lutomirski and others,
enter_from_user_mode() and prepare_exit_to_usermode() are now called only with
interrupts disabled.  Let's provide them a version of user_enter()/user_exit()
that skips saving and restoring the interrupt flag.

On an AMD-based machine I tested this patch on, with force-enabled
context tracking, the speed-up in system calls was 90 clock cycles or 6%,
measured with the following simple benchmark:

    #include <sys/signal.h>
    #include <time.h>
    #include <unistd.h>
    #include <stdio.h>

    unsigned long rdtsc()
    {
        unsigned long result;
        asm volatile("rdtsc; shl $32, %%rdx; mov %%eax, %%eax\n"
                     "or %%rdx, %%rax" : "=a" (result) : : "rdx");
        return result;
    }

    int main()
    {
        unsigned long tsc1, tsc2;
        int pid = getpid();
        int i;

        tsc1 = rdtsc();
        for (i = 0; i < 100000000; i++)
            kill(pid, SIGWINCH);
        tsc2 = rdtsc();

        printf("%ld\n", tsc2 - tsc1);
    }

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kvm@vger.kernel.org
Link: http://lkml.kernel.org/r/1466434712-31440-2-git-send-email-pbonzini@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/entry/common.c          |  4 ++--
 include/linux/context_tracking.h | 15 +++++++++++++++
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index ec138e5..618bc61 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -43,7 +43,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
 __visible void enter_from_user_mode(void)
 {
 	CT_WARN_ON(ct_state() != CONTEXT_USER);
-	user_exit();
+	user_exit_irqoff();
 }
 #else
 static inline void enter_from_user_mode(void) {}
@@ -274,7 +274,7 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 	ti->status &= ~TS_COMPAT;
 #endif
 
-	user_enter();
+	user_enter_irqoff();
 }
 
 #define SYSCALL_EXIT_WORK_FLAGS				\
diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index d259274..d9aef2a 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -31,6 +31,19 @@ static inline void user_exit(void)
 		context_tracking_exit(CONTEXT_USER);
 }
 
+/* Called with interrupts disabled.  */
+static inline void user_enter_irqoff(void)
+{
+	if (context_tracking_is_enabled())
+		__context_tracking_enter(CONTEXT_USER);
+
+}
+static inline void user_exit_irqoff(void)
+{
+	if (context_tracking_is_enabled())
+		__context_tracking_exit(CONTEXT_USER);
+}
+
 static inline enum ctx_state exception_enter(void)
 {
 	enum ctx_state prev_ctx;
@@ -69,6 +82,8 @@ static inline enum ctx_state ct_state(void)
 #else
 static inline void user_enter(void) { }
 static inline void user_exit(void) { }
+static inline void user_enter_irqoff(void) { }
+static inline void user_exit_irqoff(void) { }
 static inline enum ctx_state exception_enter(void) { return 0; }
 static inline void exception_exit(enum ctx_state prev_ctx) { }
 static inline enum ctx_state ct_state(void) { return CONTEXT_DISABLED; }

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip:x86/asm] x86/entry: Inline enter_from_user_mode()
  2016-06-20 14:58 ` [PATCH 2/2] x86/entry: Inline enter_from_user_mode Paolo Bonzini
  2016-06-20 20:22   ` Rik van Riel
  2016-07-09 11:53   ` [tip:x86/asm] x86/entry: Inline enter_from_user_mode() tip-bot for Paolo Bonzini
@ 2016-07-10 11:38   ` tip-bot for Paolo Bonzini
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Paolo Bonzini @ 2016-07-10 11:38 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: riel, brgerst, luto, pbonzini, dvlasenk, jpoimboe, bp, peterz,
	torvalds, mingo, linux-kernel, hpa, tglx

Commit-ID:  be8a18e2e98e04a5def5887d913b267865562448
Gitweb:     http://git.kernel.org/tip/be8a18e2e98e04a5def5887d913b267865562448
Author:     Paolo Bonzini <pbonzini@redhat.com>
AuthorDate: Mon, 20 Jun 2016 16:58:30 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 10 Jul 2016 13:33:02 +0200

x86/entry: Inline enter_from_user_mode()

This matches what is already done for prepare_exit_to_usermode(),
and saves about 60 clock cycles (4% speedup) with the benchmark
in the previous commit message.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kvm@vger.kernel.org
Link: http://lkml.kernel.org/r/1466434712-31440-3-git-send-email-pbonzini@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/entry/common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 618bc61..9e1e27d 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -40,7 +40,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
 
 #ifdef CONFIG_CONTEXT_TRACKING
 /* Called on entry from user mode with IRQs off. */
-__visible void enter_from_user_mode(void)
+__visible inline void enter_from_user_mode(void)
 {
 	CT_WARN_ON(ct_state() != CONTEXT_USER);
 	user_exit_irqoff();

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/2] x86/entry: Inline enter_from_user_mode
  2016-06-06 16:01     ` Paolo Bonzini
@ 2016-06-09 17:17       ` Andy Lutomirski
  0 siblings, 0 replies; 25+ messages in thread
From: Andy Lutomirski @ 2016-06-09 17:17 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Thomas Gleixner, linux-kernel, H. Peter Anvin, X86 ML,
	Rik van Riel, Peter Zijlstra, Ingo Molnar

On Mon, Jun 6, 2016 at 9:01 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>
> On 04/06/2016 07:08, Andy Lutomirski wrote:
>> On May 30, 2016 5:30 AM, "Paolo Bonzini" <pbonzini@redhat.com> wrote:
>>>
>>> This matches what is already done for prepare_exit_to_usermode,
>>> and saves about 60 clock cycles (4% speedup) with the benchmark
>>> in the previous commit message.
>>>
>>> Cc: Andy Lutomirski <luto@kernel.org>
>>> Cc: Peter Zijlstra <peterz@infradead.org>
>>> Cc: Rik van Riel <riel@redhat.com>
>>> Cc: H. Peter Anvin <hpa@zytor.com>
>>> Cc: Ingo Molnar <mingo@kernel.org.com>
>>> Cc: Thomas Gleixner <tglx@linutronix.de>
>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>> ---
>>>  arch/x86/entry/common.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
>>> index 946bc1a..582bbc8 100644
>>> --- a/arch/x86/entry/common.c
>>> +++ b/arch/x86/entry/common.c
>>> @@ -40,7 +40,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
>>>
>>>  #ifdef CONFIG_CONTEXT_TRACKING
>>>  /* Called on entry from user mode with IRQs off. */
>>> -__visible void enter_from_user_mode(void)
>>> +__visible inline void enter_from_user_mode(void)
>>>  {
>>>         CT_WARN_ON(ct_state() != CONTEXT_USER);
>>>         __user_exit();
>>
>> I wonder if an extern inline *declaration* is needed as well in this C
>> file.  At least C99 suggests it is.  Maybe __visible is sufficient to
>> force an external definition to be emitted.
>
> An extern inline declaration is not needed because the kernel uses
> -std=gnu89 (or, if you prefer, because prepare_exit_to_usermode didn't
> have one :)).
>
> It's awesomely perverted:
>
>   __attribute__((externally_visible)) inline void f(void) {}
>
>   inline void g(void) {}
>
>   extern inline void h(void);
>   extern inline void h(void) {}
>
>   inline void i(void);
>   inline void i(void) {}
>
>   extern inline void j(void);
>   inline void j(void) {}
>
> This patch (and the preexisting prepare_exit_to_usermode code) are
> equivalent to "f".
>
> Compile the above file with "--std=gnu89" or "--std=gnu99
> -fgnu89-inline" and f/g/i/j are emitted.
>
> Compile it with "--std=gnu99 -fno-gnu89-inline" and h/j is emitted.
>
> Yes, the standard is _almost exactly_ the opposite of the preexisting
> GCC implementation.  The only case which achieves the same effect is
> when declarations are "extern inline" and definitions must always be
> "inline".  Or of course just use "static inline".  At least it's
> decently documented in the GCC info documentation.

OK.

In any event, if this ever messes up, it'll fail to build and we'll notice.

--Andy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/2] x86/entry: Inline enter_from_user_mode
  2016-06-04  5:08   ` Andy Lutomirski
@ 2016-06-06 16:01     ` Paolo Bonzini
  2016-06-09 17:17       ` Andy Lutomirski
  0 siblings, 1 reply; 25+ messages in thread
From: Paolo Bonzini @ 2016-06-06 16:01 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Thomas Gleixner, linux-kernel, Ingo Molnar, H. Peter Anvin,
	X86 ML, Rik van Riel, Peter Zijlstra



On 04/06/2016 07:08, Andy Lutomirski wrote:
> On May 30, 2016 5:30 AM, "Paolo Bonzini" <pbonzini@redhat.com> wrote:
>>
>> This matches what is already done for prepare_exit_to_usermode,
>> and saves about 60 clock cycles (4% speedup) with the benchmark
>> in the previous commit message.
>>
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Rik van Riel <riel@redhat.com>
>> Cc: H. Peter Anvin <hpa@zytor.com>
>> Cc: Ingo Molnar <mingo@kernel.org>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>  arch/x86/entry/common.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
>> index 946bc1a..582bbc8 100644
>> --- a/arch/x86/entry/common.c
>> +++ b/arch/x86/entry/common.c
>> @@ -40,7 +40,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
>>
>>  #ifdef CONFIG_CONTEXT_TRACKING
>>  /* Called on entry from user mode with IRQs off. */
>> -__visible void enter_from_user_mode(void)
>> +__visible inline void enter_from_user_mode(void)
>>  {
>>         CT_WARN_ON(ct_state() != CONTEXT_USER);
>>         __user_exit();
> 
> I wonder if an extern inline *declaration* is needed as well in this C
> file.  At least C99 suggests it is.  Maybe __visible is sufficient to
> force an external definition to be emitted.

An extern inline declaration is not needed because the kernel uses
-std=gnu89 (or, if you prefer, because prepare_exit_to_usermode didn't
have one :)).

It's awesomely perverted:

  __attribute__((externally_visible)) inline void f(void) {}

  inline void g(void) {}

  extern inline void h(void);
  extern inline void h(void) {}

  inline void i(void);
  inline void i(void) {}

  extern inline void j(void);
  inline void j(void) {}

This patch (and the preexisting prepare_exit_to_usermode code) are
equivalent to "f".

Compile the above file with "--std=gnu89" or "--std=gnu99
-fgnu89-inline" and f/g/i/j are emitted.

Compile it with "--std=gnu99 -fno-gnu89-inline" and h/j is emitted.

Yes, the standard is _almost exactly_ the opposite of the preexisting
GCC implementation.  The only case which achieves the same effect is
when declarations are "extern inline" and definitions must always be
"inline".  Or of course just use "static inline".  At least it's
decently documented in the GCC info documentation.
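
To tie this back to the patch: enter_from_user_mode is also called by name
from the assembly entry code (which, if I remember the entry macros right,
is why it is __visible rather than static), so we rely on the "f" behaviour
above -- the compiler may inline it into its C callers in common.c, while
gnu89 still emits the out-of-line copy that the asm "call" resolves to.
A reduced sketch, with hypothetical names and __visible spelled out:

  /* sketch.c -- toy model of the common.c situation, not kernel code */
  #define __visible __attribute__((externally_visible))

  __visible inline void enter_from_user_mode(void)
  {
          /* context-tracking work would go here */
  }

  void c_caller(void)
  {
          enter_from_user_mode();         /* may be inlined into this caller */
  }

Compile with "gcc -std=gnu89 -O2 -c sketch.c" and nm still shows an
enter_from_user_mode symbol, so an assembly "call enter_from_user_mode"
would link; with "-std=gnu99 -fno-gnu89-inline" the symbol is gone and
such a call would fail at link time.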

Thanks,

Paolo

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/2] x86/entry: Inline enter_from_user_mode
  2016-05-30 12:30 ` [PATCH 2/2] x86/entry: Inline enter_from_user_mode Paolo Bonzini
  2016-06-01 14:54   ` Rik van Riel
@ 2016-06-04  5:08   ` Andy Lutomirski
  2016-06-06 16:01     ` Paolo Bonzini
  1 sibling, 1 reply; 25+ messages in thread
From: Andy Lutomirski @ 2016-06-04  5:08 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Thomas Gleixner, linux-kernel, Ingo Molnar, H. Peter Anvin,
	X86 ML, Rik van Riel, Peter Zijlstra

On May 30, 2016 5:30 AM, "Paolo Bonzini" <pbonzini@redhat.com> wrote:
>
> This matches what is already done for prepare_exit_to_usermode,
> and saves about 60 clock cycles (4% speedup) with the benchmark
> in the previous commit message.
>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/entry/common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> index 946bc1a..582bbc8 100644
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -40,7 +40,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
>
>  #ifdef CONFIG_CONTEXT_TRACKING
>  /* Called on entry from user mode with IRQs off. */
> -__visible void enter_from_user_mode(void)
> +__visible inline void enter_from_user_mode(void)
>  {
>         CT_WARN_ON(ct_state() != CONTEXT_USER);
>         __user_exit();

I wonder if an extern inline *declaration* is needed as well in this C
file.  At least C99 suggests it is.  Maybe __visible is sufficient to
force an external definition to be emitted.
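
Concretely, I mean something along these lines (hypothetical, just to
illustrate the C99 pattern -- the body is the one from the patch):

  extern inline void enter_from_user_mode(void);  /* declaration with extern */

  __visible inline void enter_from_user_mode(void)
  {
          CT_WARN_ON(ct_state() != CONTEXT_USER);
          __user_exit();
  }

Under C99's inline rules, a file-scope declaration that includes extern is
what would make the compiler emit the out-of-line definition as well.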

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/2] x86/entry: Inline enter_from_user_mode
  2016-05-30 12:30 ` [PATCH 2/2] x86/entry: Inline enter_from_user_mode Paolo Bonzini
@ 2016-06-01 14:54   ` Rik van Riel
  2016-06-04  5:08   ` Andy Lutomirski
  1 sibling, 0 replies; 25+ messages in thread
From: Rik van Riel @ 2016-06-01 14:54 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, x86
  Cc: Andy Lutomirski, Peter Zijlstra, H. Peter Anvin, Ingo Molnar,
	Thomas Gleixner

On Mon, 2016-05-30 at 14:30 +0200, Paolo Bonzini wrote:
> This matches what is already done for prepare_exit_to_usermode,
> and saves about 60 clock cycles (4% speedup) with the benchmark
> in the previous commit message.
> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> 
Reviewed-by: Rik van Riel <riel@redhat.com>

-- 
All Rights Reversed.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH 2/2] x86/entry: Inline enter_from_user_mode
  2016-05-30 12:30 [PATCH " Paolo Bonzini
@ 2016-05-30 12:30 ` Paolo Bonzini
  2016-06-01 14:54   ` Rik van Riel
  2016-06-04  5:08   ` Andy Lutomirski
  0 siblings, 2 replies; 25+ messages in thread
From: Paolo Bonzini @ 2016-05-30 12:30 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Andy Lutomirski, Peter Zijlstra, Rik van Riel, H. Peter Anvin,
	Ingo Molnar, Thomas Gleixner

This matches what is already done for prepare_exit_to_usermode,
and saves about 60 clock cycles (4% speedup) with the benchmark
in the previous commit message.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/entry/common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 946bc1a..582bbc8 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -40,7 +40,7 @@ static struct thread_info *pt_regs_to_thread_info(struct pt_regs *regs)
 
 #ifdef CONFIG_CONTEXT_TRACKING
 /* Called on entry from user mode with IRQs off. */
-__visible void enter_from_user_mode(void)
+__visible inline void enter_from_user_mode(void)
 {
 	CT_WARN_ON(ct_state() != CONTEXT_USER);
 	__user_exit();
-- 
2.4.3

^ permalink raw reply related	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2016-07-10 11:39 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-20 14:58 [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Paolo Bonzini
2016-06-20 14:58 ` [PATCH 1/2] x86/entry: Avoid interrupt flag save and restore Paolo Bonzini
2016-06-20 20:21   ` Rik van Riel
2016-06-20 20:34   ` Andy Lutomirski
2016-07-09 11:52   ` [tip:x86/asm] " tip-bot for Paolo Bonzini
2016-07-10 11:37   ` tip-bot for Paolo Bonzini
2016-06-20 14:58 ` [PATCH 2/2] x86/entry: Inline enter_from_user_mode Paolo Bonzini
2016-06-20 20:22   ` Rik van Riel
2016-07-09 11:53   ` [tip:x86/asm] x86/entry: Inline enter_from_user_mode() tip-bot for Paolo Bonzini
2016-07-09 13:38     ` Borislav Petkov
2016-07-10 11:33       ` Ingo Molnar
2016-07-10 11:38   ` tip-bot for Paolo Bonzini
2016-06-20 14:58 ` [PATCH 3/2] context_tracking: move rcu_virt_note_context_switch out of kvm_host.h Paolo Bonzini
2016-06-20 20:23   ` Rik van Riel
2016-06-20 14:58 ` [PATCH 4/2] KVM: remove kvm_guest_enter/exit wrappers Paolo Bonzini
2016-06-20 20:24   ` Rik van Riel
2016-06-21 13:24 ` [PATCH v2 0/2] x86/entry: speed up context-tracking system calls by 150 clock cycles Christian Borntraeger
2016-06-21 13:26   ` Paolo Bonzini
2016-06-28 12:16 ` Paolo Bonzini
2016-07-06 13:47 ` Paolo Bonzini
  -- strict thread matches above, loose matches on Subject: below --
2016-05-30 12:30 [PATCH " Paolo Bonzini
2016-05-30 12:30 ` [PATCH 2/2] x86/entry: Inline enter_from_user_mode Paolo Bonzini
2016-06-01 14:54   ` Rik van Riel
2016-06-04  5:08   ` Andy Lutomirski
2016-06-06 16:01     ` Paolo Bonzini
2016-06-09 17:17       ` Andy Lutomirski
