* [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup
@ 2014-04-29 13:36 Christian Borntraeger
  2014-04-29 13:36 ` [GIT PULL 1/6] KVM: s390: Handle MVPG partial execution interception Christian Borntraeger
                   ` (6 more replies)
  0 siblings, 7 replies; 11+ messages in thread
From: Christian Borntraeger @ 2014-04-29 13:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Gleb Natapov, KVM, linux-s390, Cornelia Huck, Christian Borntraeger

Paolo, Gleb,

please consider the pull request below for 3.16.
Thanks

Christian


The following changes since commit 198c74f43f0f5473f99967aead30ddc622804bc1:

  KVM: MMU: flush tlb out of mmu lock when write-protect the sptes (2014-04-23 17:49:52 -0300)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git  tags/kvm-s390-20140429

for you to fetch changes up to 8ad357551797b1edc184fb9f6a4f80a6fa626459:

  KVM: s390: enable IBS for single running VCPUs (2014-04-29 15:01:54 +0200)

----------------------------------------------------------------
1. Guest handling fixes
The handling of MVPG, PFMF and TEST BLOCK is fixed to follow the
architecture more closely. None of these fixes is critical for current
Linux guests, but let's play it safe.

2. Optimization for single CPU guests
We can enable the IBS facility if only one VCPU is running (i.e. not in
the STOPPED state). We also enable this optimization for guests with
more than one VCPU as soon as all but one VCPU are in the stopped
state. This will help guests that use tools like cpuplugd (from
s390-utils) to dynamically offline/online CPUs.

3. NOTES
There is one non-s390 change in include/linux/kvm_host.h that
introduces two defines for VCPU requests:
#define KVM_REQ_ENABLE_IBS        23
#define KVM_REQ_DISABLE_IBS       24

----------------------------------------------------------------
David Hildenbrand (2):
      KVM: s390: introduce kvm_s390_vcpu_{start,stop}
      KVM: s390: enable IBS for single running VCPUs

Thomas Huth (4):
      KVM: s390: Handle MVPG partial execution interception
      KVM: s390: Add a function for checking the low-address protection
      KVM: s390: Fixes for PFMF
      KVM: s390: Add low-address protection to TEST BLOCK

 arch/s390/include/asm/kvm_host.h |   2 +
 arch/s390/kvm/diag.c             |   2 +-
 arch/s390/kvm/gaccess.c          |  28 ++++++++
 arch/s390/kvm/gaccess.h          |   1 +
 arch/s390/kvm/intercept.c        |  58 +++++++++++++++-
 arch/s390/kvm/interrupt.c        |   2 +-
 arch/s390/kvm/kvm-s390.c         | 139 +++++++++++++++++++++++++++++++++++++--
 arch/s390/kvm/kvm-s390.h         |   2 +
 arch/s390/kvm/priv.c             |  21 ++++--
 arch/s390/kvm/trace-s390.h       |  43 ++++++++++++
 include/linux/kvm_host.h         |   2 +
 11 files changed, 287 insertions(+), 13 deletions(-)

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [GIT PULL 1/6] KVM: s390: Handle MVPG partial execution interception
  2014-04-29 13:36 [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Christian Borntraeger
@ 2014-04-29 13:36 ` Christian Borntraeger
  2014-04-30  8:07   ` Heiko Carstens
  2014-04-29 13:36 ` [GIT PULL 2/6] KVM: s390: Add a function for checking the low-address protection Christian Borntraeger
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 11+ messages in thread
From: Christian Borntraeger @ 2014-04-29 13:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Gleb Natapov, KVM, linux-s390, Cornelia Huck, Thomas Huth,
	Christian Borntraeger

From: Thomas Huth <thuth@linux.vnet.ibm.com>

When the guest executes the MVPG instruction with DAT disabled,
and the source or destination page is not mapped in the host,
the so-called partial execution interception occurs. We need to
handle this event by setting up a mapping for the corresponding
user pages.

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kvm/intercept.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 54 insertions(+), 1 deletion(-)

diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
index 30e1c5e..54313fe 100644
--- a/arch/s390/kvm/intercept.c
+++ b/arch/s390/kvm/intercept.c
@@ -1,7 +1,7 @@
 /*
  * in-kernel handling for sie intercepts
  *
- * Copyright IBM Corp. 2008, 2009
+ * Copyright IBM Corp. 2008, 2014
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License (version 2 only)
@@ -234,6 +234,58 @@ static int handle_instruction_and_prog(struct kvm_vcpu *vcpu)
 	return rc2;
 }
 
+/**
+ * Handle MOVE PAGE partial execution interception.
+ *
+ * This interception can only happen for guests with DAT disabled and
+ * addresses that are currently not mapped in the host. Thus we try to
+ * set up the mappings for the corresponding user pages here (or throw
+ * addressing exceptions in case of illegal guest addresses).
+ */
+static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
+{
+	unsigned long hostaddr, srcaddr, dstaddr;
+	psw_t *psw = &vcpu->arch.sie_block->gpsw;
+	struct mm_struct *mm = current->mm;
+	int reg1, reg2, rc;
+
+	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
+	srcaddr = kvm_s390_real_to_abs(vcpu, vcpu->run->s.regs.gprs[reg2]);
+	dstaddr = kvm_s390_real_to_abs(vcpu, vcpu->run->s.regs.gprs[reg1]);
+
+	/* Make sure that the source is paged-in */
+	hostaddr = gmap_fault(srcaddr, vcpu->arch.gmap);
+	if (IS_ERR_VALUE(hostaddr))
+		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
+	down_read(&mm->mmap_sem);
+	rc = get_user_pages(current, mm, hostaddr, 1, 0, 0, NULL, NULL);
+	up_read(&mm->mmap_sem);
+	if (rc < 0)
+		return rc;
+
+	/* Make sure that the destination is paged-in */
+	hostaddr = gmap_fault(dstaddr, vcpu->arch.gmap);
+	if (IS_ERR_VALUE(hostaddr))
+		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
+	down_read(&mm->mmap_sem);
+	rc = get_user_pages(current, mm, hostaddr, 1, 1, 0, NULL, NULL);
+	up_read(&mm->mmap_sem);
+	if (rc < 0)
+		return rc;
+
+	psw->addr = __rewind_psw(*psw, 4);
+
+	return 0;
+}
+
+static int handle_partial_execution(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.sie_block->ipa == 0xb254)	/* MVPG */
+		return handle_mvpg_pei(vcpu);
+
+	return -EOPNOTSUPP;
+}
+
 static const intercept_handler_t intercept_funcs[] = {
 	[0x00 >> 2] = handle_noop,
 	[0x04 >> 2] = handle_instruction,
@@ -245,6 +297,7 @@ static const intercept_handler_t intercept_funcs[] = {
 	[0x1C >> 2] = kvm_s390_handle_wait,
 	[0x20 >> 2] = handle_validity,
 	[0x28 >> 2] = handle_stop,
+	[0x38 >> 2] = handle_partial_execution,
 };
 
 int kvm_handle_sie_intercept(struct kvm_vcpu *vcpu)
-- 
1.8.4.2


* [GIT PULL 2/6] KVM: s390: Add a function for checking the low-address protection
  2014-04-29 13:36 [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Christian Borntraeger
  2014-04-29 13:36 ` [GIT PULL 1/6] KVM: s390: Handle MVPG partial execution interception Christian Borntraeger
@ 2014-04-29 13:36 ` Christian Borntraeger
  2014-04-29 13:36 ` [GIT PULL 3/6] KVM: s390: Fixes for PFMF Christian Borntraeger
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Christian Borntraeger @ 2014-04-29 13:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Gleb Natapov, KVM, linux-s390, Cornelia Huck, Thomas Huth,
	Christian Borntraeger

From: Thomas Huth <thuth@linux.vnet.ibm.com>

The s390 architecture has a special protection mechanism that can
be used to prevent write access to vital data in the low-core
memory area. This patch adds a new helper function that can be used
to check for such write accesses; in case of a protection violation,
it also sets up the exception data accordingly.

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kvm/gaccess.c | 28 ++++++++++++++++++++++++++++
 arch/s390/kvm/gaccess.h |  1 +
 2 files changed, 29 insertions(+)

diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 691fdb7..db608c3 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -643,3 +643,31 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
 	}
 	return rc;
 }
+
+/**
+ * kvm_s390_check_low_addr_protection - check for low-address protection
+ * @ga: Guest address
+ *
+ * Checks whether an address is subject to low-address protection and sets
+ * up vcpu->arch.pgm accordingly if necessary.
+ *
+ * Return: 0 if no protection exception, or PGM_PROTECTION if protected.
+ */
+int kvm_s390_check_low_addr_protection(struct kvm_vcpu *vcpu, unsigned long ga)
+{
+	struct kvm_s390_pgm_info *pgm = &vcpu->arch.pgm;
+	psw_t *psw = &vcpu->arch.sie_block->gpsw;
+	struct trans_exc_code_bits *tec_bits;
+
+	if (!is_low_address(ga) || !low_address_protection_enabled(vcpu))
+		return 0;
+
+	memset(pgm, 0, sizeof(*pgm));
+	tec_bits = (struct trans_exc_code_bits *)&pgm->trans_exc_code;
+	tec_bits->fsi = FSI_STORE;
+	tec_bits->as = psw_bits(*psw).as;
+	tec_bits->addr = ga >> PAGE_SHIFT;
+	pgm->code = PGM_PROTECTION;
+
+	return pgm->code;
+}
diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index 1079c8f..68db43e 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -325,5 +325,6 @@ int read_guest_real(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
 }
 
 int ipte_lock_held(struct kvm_vcpu *vcpu);
+int kvm_s390_check_low_addr_protection(struct kvm_vcpu *vcpu, unsigned long ga);
 
 #endif /* __KVM_S390_GACCESS_H */
-- 
1.8.4.2


* [GIT PULL 3/6] KVM: s390: Fixes for PFMF
  2014-04-29 13:36 [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Christian Borntraeger
  2014-04-29 13:36 ` [GIT PULL 1/6] KVM: s390: Handle MVPG partial execution interception Christian Borntraeger
  2014-04-29 13:36 ` [GIT PULL 2/6] KVM: s390: Add a function for checking the low-address protection Christian Borntraeger
@ 2014-04-29 13:36 ` Christian Borntraeger
  2014-04-29 13:36 ` [GIT PULL 4/6] KVM: s390: Add low-address protection to TEST BLOCK Christian Borntraeger
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Christian Borntraeger @ 2014-04-29 13:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Gleb Natapov, KVM, linux-s390, Cornelia Huck, Thomas Huth,
	Christian Borntraeger

From: Thomas Huth <thuth@linux.vnet.ibm.com>

Add a check for low-address protection to the PFMF handler and
convert real addresses to absolute addresses where necessary, as
defined in the Principles of Operation.

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kvm/priv.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index 27f9051..a47157b 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -650,6 +650,11 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
 
 	start = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK;
+	if (vcpu->run->s.regs.gprs[reg1] & PFMF_CF) {
+		if (kvm_s390_check_low_addr_protection(vcpu, start))
+			return kvm_s390_inject_prog_irq(vcpu, &vcpu->arch.pgm);
+	}
+
 	switch (vcpu->run->s.regs.gprs[reg1] & PFMF_FSC) {
 	case 0x00000000:
 		end = (start + (1UL << 12)) & ~((1UL << 12) - 1);
@@ -665,10 +670,15 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
 	}
 	while (start < end) {
-		unsigned long useraddr;
-
-		useraddr = gmap_translate(start, vcpu->arch.gmap);
-		if (IS_ERR((void *)useraddr))
+		unsigned long useraddr, abs_addr;
+
+		/* Translate guest address to host address */
+		if ((vcpu->run->s.regs.gprs[reg1] & PFMF_FSC) == 0)
+			abs_addr = kvm_s390_real_to_abs(vcpu, start);
+		else
+			abs_addr = start;
+		useraddr = gfn_to_hva(vcpu->kvm, gpa_to_gfn(abs_addr));
+		if (kvm_is_error_hva(useraddr))
 			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
 
 		if (vcpu->run->s.regs.gprs[reg1] & PFMF_CF) {
-- 
1.8.4.2


* [GIT PULL 4/6] KVM: s390: Add low-address protection to TEST BLOCK
  2014-04-29 13:36 [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Christian Borntraeger
                   ` (2 preceding siblings ...)
  2014-04-29 13:36 ` [GIT PULL 3/6] KVM: s390: Fixes for PFMF Christian Borntraeger
@ 2014-04-29 13:36 ` Christian Borntraeger
  2014-04-29 13:36 ` [GIT PULL 5/6] KVM: s390: introduce kvm_s390_vcpu_{start,stop} Christian Borntraeger
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Christian Borntraeger @ 2014-04-29 13:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Gleb Natapov, KVM, linux-s390, Cornelia Huck, Thomas Huth,
	Christian Borntraeger

From: Thomas Huth <thuth@linux.vnet.ibm.com>

TEST BLOCK is also subject to the low-address protection, so we need
to check the destination address in our handler.

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kvm/priv.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index a47157b..07d0c10 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -206,6 +206,9 @@ static int handle_test_block(struct kvm_vcpu *vcpu)
 
 	kvm_s390_get_regs_rre(vcpu, NULL, &reg2);
 	addr = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK;
+	addr = kvm_s390_logical_to_effective(vcpu, addr);
+	if (kvm_s390_check_low_addr_protection(vcpu, addr))
+		return kvm_s390_inject_prog_irq(vcpu, &vcpu->arch.pgm);
 	addr = kvm_s390_real_to_abs(vcpu, addr);
 
 	if (kvm_is_error_gpa(vcpu->kvm, addr))
-- 
1.8.4.2


* [GIT PULL 5/6] KVM: s390: introduce kvm_s390_vcpu_{start,stop}
  2014-04-29 13:36 [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Christian Borntraeger
                   ` (3 preceding siblings ...)
  2014-04-29 13:36 ` [GIT PULL 4/6] KVM: s390: Add low-address protection to TEST BLOCK Christian Borntraeger
@ 2014-04-29 13:36 ` Christian Borntraeger
  2014-04-29 13:36 ` [GIT PULL 6/6] KVM: s390: enable IBS for single running VCPUs Christian Borntraeger
  2014-04-30 10:30 ` [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Paolo Bonzini
  6 siblings, 0 replies; 11+ messages in thread
From: Christian Borntraeger @ 2014-04-29 13:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Gleb Natapov, KVM, linux-s390, Cornelia Huck, David Hildenbrand,
	Christian Borntraeger

From: David Hildenbrand <dahi@linux.vnet.ibm.com>

This patch introduces two new functions to set/clear the CPUSTAT_STOPPED
bit and uses them in all applicable places. These functions prepare for
executing additional code when starting/stopping a vcpu.

The CPUSTAT_STOPPED bit should not be touched outside of these functions.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kvm/diag.c       |  2 +-
 arch/s390/kvm/intercept.c  |  3 +--
 arch/s390/kvm/interrupt.c  |  2 +-
 arch/s390/kvm/kvm-s390.c   | 16 ++++++++++++++--
 arch/s390/kvm/kvm-s390.h   |  2 ++
 arch/s390/kvm/trace-s390.h | 21 +++++++++++++++++++++
 6 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c
index 5521ace..004d385 100644
--- a/arch/s390/kvm/diag.c
+++ b/arch/s390/kvm/diag.c
@@ -176,7 +176,7 @@ static int __diag_ipl_functions(struct kvm_vcpu *vcpu)
 		return -EOPNOTSUPP;
 	}
 
-	atomic_set_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+	kvm_s390_vcpu_stop(vcpu);
 	vcpu->run->s390_reset_flags |= KVM_S390_RESET_SUBSYSTEM;
 	vcpu->run->s390_reset_flags |= KVM_S390_RESET_IPL;
 	vcpu->run->s390_reset_flags |= KVM_S390_RESET_CPU_INIT;
diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
index 54313fe..99e4b76 100644
--- a/arch/s390/kvm/intercept.c
+++ b/arch/s390/kvm/intercept.c
@@ -65,8 +65,7 @@ static int handle_stop(struct kvm_vcpu *vcpu)
 	trace_kvm_s390_stop_request(vcpu->arch.local_int.action_bits);
 
 	if (vcpu->arch.local_int.action_bits & ACTION_STOP_ON_STOP) {
-		atomic_set_mask(CPUSTAT_STOPPED,
-				&vcpu->arch.sie_block->cpuflags);
+		kvm_s390_vcpu_stop(vcpu);
 		vcpu->arch.local_int.action_bits &= ~ACTION_STOP_ON_STOP;
 		VCPU_EVENT(vcpu, 3, "%s", "cpu stopped");
 		rc = -EOPNOTSUPP;
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 077e473..d9526bb 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -413,7 +413,7 @@ static void __do_deliver_interrupt(struct kvm_vcpu *vcpu,
 		rc |= read_guest_lc(vcpu, offsetof(struct _lowcore, restart_psw),
 				    &vcpu->arch.sie_block->gpsw,
 				    sizeof(psw_t));
-		atomic_clear_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+		kvm_s390_vcpu_start(vcpu);
 		break;
 	case KVM_S390_PROGRAM_INT:
 		VCPU_EVENT(vcpu, 4, "interrupt: pgm check code:%x, ilc:%x",
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index b32c42c..6c972d2 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -592,7 +592,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
 	vcpu->arch.sie_block->pp = 0;
 	vcpu->arch.pfault_token = KVM_S390_PFAULT_TOKEN_INVALID;
 	kvm_clear_async_pf_completion_queue(vcpu);
-	atomic_set_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+	kvm_s390_vcpu_stop(vcpu);
 	kvm_s390_clear_local_irqs(vcpu);
 }
 
@@ -1235,7 +1235,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 	if (vcpu->sigset_active)
 		sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);
 
-	atomic_clear_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+	kvm_s390_vcpu_start(vcpu);
 
 	switch (kvm_run->exit_reason) {
 	case KVM_EXIT_S390_SIEIC:
@@ -1362,6 +1362,18 @@ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
 	return kvm_s390_store_status_unloaded(vcpu, addr);
 }
 
+void kvm_s390_vcpu_start(struct kvm_vcpu *vcpu)
+{
+	trace_kvm_s390_vcpu_start_stop(vcpu->vcpu_id, 1);
+	atomic_clear_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+}
+
+void kvm_s390_vcpu_stop(struct kvm_vcpu *vcpu)
+{
+	trace_kvm_s390_vcpu_start_stop(vcpu->vcpu_id, 0);
+	atomic_set_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+}
+
 static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 				     struct kvm_enable_cap *cap)
 {
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 9b5680d..c28423a 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -157,6 +157,8 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu);
 /* implemented in kvm-s390.c */
 int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
 int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr);
+void kvm_s390_vcpu_start(struct kvm_vcpu *vcpu);
+void kvm_s390_vcpu_stop(struct kvm_vcpu *vcpu);
 void s390_vcpu_block(struct kvm_vcpu *vcpu);
 void s390_vcpu_unblock(struct kvm_vcpu *vcpu);
 void exit_sie(struct kvm_vcpu *vcpu);
diff --git a/arch/s390/kvm/trace-s390.h b/arch/s390/kvm/trace-s390.h
index 13f30f5..34d4f8a 100644
--- a/arch/s390/kvm/trace-s390.h
+++ b/arch/s390/kvm/trace-s390.h
@@ -68,6 +68,27 @@ TRACE_EVENT(kvm_s390_destroy_vcpu,
 	);
 
 /*
+ * Trace point for start and stop of vcpus.
+ */
+TRACE_EVENT(kvm_s390_vcpu_start_stop,
+	    TP_PROTO(unsigned int id, int state),
+	    TP_ARGS(id, state),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int, id)
+		    __field(int, state)
+		    ),
+
+	    TP_fast_assign(
+		    __entry->id = id;
+		    __entry->state = state;
+		    ),
+
+	    TP_printk("%s cpu %d", __entry->state ? "starting" : "stopping",
+		      __entry->id)
+	);
+
+/*
  * Trace points for injection of interrupts, either per machine or
  * per vcpu.
  */
-- 
1.8.4.2


* [GIT PULL 6/6] KVM: s390: enable IBS for single running VCPUs
  2014-04-29 13:36 [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Christian Borntraeger
                   ` (4 preceding siblings ...)
  2014-04-29 13:36 ` [GIT PULL 5/6] KVM: s390: introduce kvm_s390_vcpu_{start,stop} Christian Borntraeger
@ 2014-04-29 13:36 ` Christian Borntraeger
  2014-04-30 10:30 ` [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Paolo Bonzini
  6 siblings, 0 replies; 11+ messages in thread
From: Christian Borntraeger @ 2014-04-29 13:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Gleb Natapov, KVM, linux-s390, Cornelia Huck, David Hildenbrand,
	Christian Borntraeger

From: David Hildenbrand <dahi@linux.vnet.ibm.com>

This patch enables the IBS facility when a single VCPU is running.
The facility is dynamically turned on/off as soon as other VCPUs
enter/leave the stopped state.

When this facility is operating, some instructions can be executed
faster for single-cpu guests.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/include/asm/kvm_host.h |   2 +
 arch/s390/kvm/kvm-s390.c         | 123 ++++++++++++++++++++++++++++++++++++++-
 arch/s390/kvm/trace-s390.h       |  22 +++++++
 include/linux/kvm_host.h         |   2 +
 4 files changed, 147 insertions(+), 2 deletions(-)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 0d45f6f..f0a1dc5 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -72,6 +72,7 @@ struct sca_block {
 #define CPUSTAT_ZARCH      0x00000800
 #define CPUSTAT_MCDS       0x00000100
 #define CPUSTAT_SM         0x00000080
+#define CPUSTAT_IBS        0x00000040
 #define CPUSTAT_G          0x00000008
 #define CPUSTAT_GED        0x00000004
 #define CPUSTAT_J          0x00000002
@@ -411,6 +412,7 @@ struct kvm_arch{
 	int use_cmma;
 	struct s390_io_adapter *adapters[MAX_S390_IO_ADAPTERS];
 	wait_queue_head_t ipte_wq;
+	spinlock_t start_stop_lock;
 };
 
 #define KVM_HVA_ERR_BAD		(-1UL)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 6c972d2..0a01744 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -458,6 +458,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm->arch.css_support = 0;
 	kvm->arch.use_irqchip = 0;
 
+	spin_lock_init(&kvm->arch.start_stop_lock);
+
 	return 0;
 out_nogmap:
 	debug_unregister(kvm->arch.dbf);
@@ -996,8 +998,15 @@ bool kvm_s390_cmma_enabled(struct kvm *kvm)
 	return true;
 }
 
+static bool ibs_enabled(struct kvm_vcpu *vcpu)
+{
+	return atomic_read(&vcpu->arch.sie_block->cpuflags) & CPUSTAT_IBS;
+}
+
 static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
 {
+retry:
+	s390_vcpu_unblock(vcpu);
 	/*
 	 * We use MMU_RELOAD just to re-arm the ipte notifier for the
 	 * guest prefix page. gmap_ipte_notify will wait on the ptl lock.
@@ -1005,15 +1014,34 @@ static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
 	 * already finished. We might race against a second unmapper that
 	 * wants to set the blocking bit. Lets just retry the request loop.
 	 */
-	while (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu)) {
+	if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu)) {
 		int rc;
 		rc = gmap_ipte_notify(vcpu->arch.gmap,
 				      vcpu->arch.sie_block->prefix,
 				      PAGE_SIZE * 2);
 		if (rc)
 			return rc;
-		s390_vcpu_unblock(vcpu);
+		goto retry;
+	}
+
+	if (kvm_check_request(KVM_REQ_ENABLE_IBS, vcpu)) {
+		if (!ibs_enabled(vcpu)) {
+			trace_kvm_s390_enable_disable_ibs(vcpu->vcpu_id, 1);
+			atomic_set_mask(CPUSTAT_IBS,
+					&vcpu->arch.sie_block->cpuflags);
+		}
+		goto retry;
 	}
+
+	if (kvm_check_request(KVM_REQ_DISABLE_IBS, vcpu)) {
+		if (ibs_enabled(vcpu)) {
+			trace_kvm_s390_enable_disable_ibs(vcpu->vcpu_id, 0);
+			atomic_clear_mask(CPUSTAT_IBS,
+					  &vcpu->arch.sie_block->cpuflags);
+		}
+		goto retry;
+	}
+
 	return 0;
 }
 
@@ -1362,16 +1390,107 @@ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
 	return kvm_s390_store_status_unloaded(vcpu, addr);
 }
 
+static inline int is_vcpu_stopped(struct kvm_vcpu *vcpu)
+{
+	return atomic_read(&(vcpu)->arch.sie_block->cpuflags) & CPUSTAT_STOPPED;
+}
+
+static void __disable_ibs_on_vcpu(struct kvm_vcpu *vcpu)
+{
+	kvm_check_request(KVM_REQ_ENABLE_IBS, vcpu);
+	kvm_make_request(KVM_REQ_DISABLE_IBS, vcpu);
+	exit_sie_sync(vcpu);
+}
+
+static void __disable_ibs_on_all_vcpus(struct kvm *kvm)
+{
+	unsigned int i;
+	struct kvm_vcpu *vcpu;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		__disable_ibs_on_vcpu(vcpu);
+	}
+}
+
+static void __enable_ibs_on_vcpu(struct kvm_vcpu *vcpu)
+{
+	kvm_check_request(KVM_REQ_DISABLE_IBS, vcpu);
+	kvm_make_request(KVM_REQ_ENABLE_IBS, vcpu);
+	exit_sie_sync(vcpu);
+}
+
 void kvm_s390_vcpu_start(struct kvm_vcpu *vcpu)
 {
+	int i, online_vcpus, started_vcpus = 0;
+
+	if (!is_vcpu_stopped(vcpu))
+		return;
+
 	trace_kvm_s390_vcpu_start_stop(vcpu->vcpu_id, 1);
+	/* Only one cpu at a time may enter/leave the STOPPED state. */
+	spin_lock_bh(&vcpu->kvm->arch.start_stop_lock);
+	online_vcpus = atomic_read(&vcpu->kvm->online_vcpus);
+
+	for (i = 0; i < online_vcpus; i++) {
+		if (!is_vcpu_stopped(vcpu->kvm->vcpus[i]))
+			started_vcpus++;
+	}
+
+	if (started_vcpus == 0) {
+		/* we're the only active VCPU -> speed it up */
+		__enable_ibs_on_vcpu(vcpu);
+	} else if (started_vcpus == 1) {
+		/*
+		 * As we are starting a second VCPU, we have to disable
+		 * the IBS facility on all VCPUs to remove potentially
+		 * outstanding ENABLE requests.
+		 */
+		__disable_ibs_on_all_vcpus(vcpu->kvm);
+	}
+
 	atomic_clear_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+	/*
+	 * Another VCPU might have used IBS while we were offline.
+	 * Let's play safe and flush the VCPU at startup.
+	 */
+	vcpu->arch.sie_block->ihcpu  = 0xffff;
+	spin_unlock_bh(&vcpu->kvm->arch.start_stop_lock);
+	return;
 }
 
 void kvm_s390_vcpu_stop(struct kvm_vcpu *vcpu)
 {
+	int i, online_vcpus, started_vcpus = 0;
+	struct kvm_vcpu *started_vcpu = NULL;
+
+	if (is_vcpu_stopped(vcpu))
+		return;
+
 	trace_kvm_s390_vcpu_start_stop(vcpu->vcpu_id, 0);
+	/* Only one cpu at a time may enter/leave the STOPPED state. */
+	spin_lock_bh(&vcpu->kvm->arch.start_stop_lock);
+	online_vcpus = atomic_read(&vcpu->kvm->online_vcpus);
+
 	atomic_set_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+	__disable_ibs_on_vcpu(vcpu);
+
+	for (i = 0; i < online_vcpus; i++) {
+		if (!is_vcpu_stopped(vcpu->kvm->vcpus[i])) {
+			started_vcpus++;
+			started_vcpu = vcpu->kvm->vcpus[i];
+		}
+	}
+
+	if (started_vcpus == 1) {
+		/*
+		 * As we only have one VCPU left, we want to enable the
+		 * IBS facility for that VCPU to speed it up.
+		 */
+		__enable_ibs_on_vcpu(started_vcpu);
+	}
+
+	spin_unlock_bh(&vcpu->kvm->arch.start_stop_lock);
+	return;
 }
 
 static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
diff --git a/arch/s390/kvm/trace-s390.h b/arch/s390/kvm/trace-s390.h
index 34d4f8a..647e9d6 100644
--- a/arch/s390/kvm/trace-s390.h
+++ b/arch/s390/kvm/trace-s390.h
@@ -244,6 +244,28 @@ TRACE_EVENT(kvm_s390_enable_css,
 		      __entry->kvm)
 	);
 
+/*
+ * Trace point for enabling and disabling interlocking-and-broadcasting
+ * suppression.
+ */
+TRACE_EVENT(kvm_s390_enable_disable_ibs,
+	    TP_PROTO(unsigned int id, int state),
+	    TP_ARGS(id, state),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int, id)
+		    __field(int, state)
+		    ),
+
+	    TP_fast_assign(
+		    __entry->id = id;
+		    __entry->state = state;
+		    ),
+
+	    TP_printk("%s ibs on cpu %d",
+		      __entry->state ? "enabling" : "disabling", __entry->id)
+	);
+
 
 #endif /* _TRACE_KVMS390_H */
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 820fc2e..1e125b0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -134,6 +134,8 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_EPR_EXIT          20
 #define KVM_REQ_SCAN_IOAPIC       21
 #define KVM_REQ_GLOBAL_CLOCK_UPDATE 22
+#define KVM_REQ_ENABLE_IBS        23
+#define KVM_REQ_DISABLE_IBS       24
 
 #define KVM_USERSPACE_IRQ_SOURCE_ID		0
 #define KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID	1
-- 
1.8.4.2


* Re: [GIT PULL 1/6] KVM: s390: Handle MVPG partial execution interception
  2014-04-29 13:36 ` [GIT PULL 1/6] KVM: s390: Handle MVPG partial execution interception Christian Borntraeger
@ 2014-04-30  8:07   ` Heiko Carstens
  2014-04-30  8:53     ` Thomas Huth
  2014-04-30 10:07     ` Christian Borntraeger
  0 siblings, 2 replies; 11+ messages in thread
From: Heiko Carstens @ 2014-04-30  8:07 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Paolo Bonzini, Gleb Natapov, KVM, linux-s390, Cornelia Huck, Thomas Huth

On Tue, Apr 29, 2014 at 03:36:43PM +0200, Christian Borntraeger wrote:
> +static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
> +{
> +	unsigned long hostaddr, srcaddr, dstaddr;
> +	psw_t *psw = &vcpu->arch.sie_block->gpsw;
> +	struct mm_struct *mm = current->mm;
> +	int reg1, reg2, rc;
> +
> +	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
> +	srcaddr = kvm_s390_real_to_abs(vcpu, vcpu->run->s.regs.gprs[reg2]);
> +	dstaddr = kvm_s390_real_to_abs(vcpu, vcpu->run->s.regs.gprs[reg1]);
> +
> +	/* Make sure that the source is paged-in */
> +	hostaddr = gmap_fault(srcaddr, vcpu->arch.gmap);
> +	if (IS_ERR_VALUE(hostaddr))
> +		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);

FWIW (and nothing that should keep this code from going upstream),
this is not entirely correct, since gmap_fault() may return -ENOMEM.
So a host out-of-memory situation will incorrectly result in a guest
addressing exception, which is most likely not what we want.


* Re: [GIT PULL 1/6] KVM: s390: Handle MVPG partial execution interception
  2014-04-30  8:07   ` Heiko Carstens
@ 2014-04-30  8:53     ` Thomas Huth
  2014-04-30 10:07     ` Christian Borntraeger
  1 sibling, 0 replies; 11+ messages in thread
From: Thomas Huth @ 2014-04-30  8:53 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: Christian Borntraeger, Paolo Bonzini, Gleb Natapov, KVM,
	linux-s390, Cornelia Huck

On Wed, 30 Apr 2014 10:07:09 +0200
Heiko Carstens <heiko.carstens@de.ibm.com> wrote:

> On Tue, Apr 29, 2014 at 03:36:43PM +0200, Christian Borntraeger wrote:
> > +static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
> > +{
> > +	unsigned long hostaddr, srcaddr, dstaddr;
> > +	psw_t *psw = &vcpu->arch.sie_block->gpsw;
> > +	struct mm_struct *mm = current->mm;
> > +	int reg1, reg2, rc;
> > +
> > +	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
> > +	srcaddr = kvm_s390_real_to_abs(vcpu, vcpu->run->s.regs.gprs[reg2]);
> > +	dstaddr = kvm_s390_real_to_abs(vcpu, vcpu->run->s.regs.gprs[reg1]);
> > +
> > +	/* Make sure that the source is paged-in */
> > +	hostaddr = gmap_fault(srcaddr, vcpu->arch.gmap);
> > +	if (IS_ERR_VALUE(hostaddr))
> > +		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
> 
> FWIW (and nothing that should keep this code from going upstream),
> this is not entirely correct, since gmap_fault() may return -ENOMEM.
> So a host out-of-memory situation will incorrectly result in a guest
> addressing exception, which is most likely not what we want.

Ah, ... good point, thanks for the hint! (BTW: that's why I personally
prefer some more comments in the source code - just by looking at
gmap_fault() and __gmap_fault(), this is quite hard to see unless you
step through these functions and the functions they call line by line.)

Anyway, I'll assemble a follow-up patch that addresses this
problem with handle_mvpg_pei().

 Thomas


* Re: [GIT PULL 1/6] KVM: s390: Handle MVPG partial execution interception
  2014-04-30  8:07   ` Heiko Carstens
  2014-04-30  8:53     ` Thomas Huth
@ 2014-04-30 10:07     ` Christian Borntraeger
  1 sibling, 0 replies; 11+ messages in thread
From: Christian Borntraeger @ 2014-04-30 10:07 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: Paolo Bonzini, Gleb Natapov, KVM, linux-s390, Cornelia Huck, Thomas Huth

On 30/04/14 10:07, Heiko Carstens wrote:
> On Tue, Apr 29, 2014 at 03:36:43PM +0200, Christian Borntraeger wrote:
>> +static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
>> +{
>> +	unsigned long hostaddr, srcaddr, dstaddr;
>> +	psw_t *psw = &vcpu->arch.sie_block->gpsw;
>> +	struct mm_struct *mm = current->mm;
>> +	int reg1, reg2, rc;
>> +
>> +	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
>> +	srcaddr = kvm_s390_real_to_abs(vcpu, vcpu->run->s.regs.gprs[reg2]);
>> +	dstaddr = kvm_s390_real_to_abs(vcpu, vcpu->run->s.regs.gprs[reg1]);
>> +
>> +	/* Make sure that the source is paged-in */
>> +	hostaddr = gmap_fault(srcaddr, vcpu->arch.gmap);
>> +	if (IS_ERR_VALUE(hostaddr))
>> +		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
> 
> FWIW (and nothing that should keep this code from going upstream),
> this is not entirely correct, since gmap_fault() may return -ENOMEM.
> So a host out-of-memory situation will incorrectly result in a guest
> addressing exception, which is most likely not what we want.
> 
Indeed, a host out-of-memory situation will cause architectural non-compliance in some areas of KVM/s390.
The proper Linux way (returning -ENOMEM (or -EFAULT?) from the KVM_RUN ioctl) would cause QEMU to print an error and abort(), which is also not ideal.
The s390 way would probably be to inject an uncorrectable storage error machine check.

So the question is: what is the right thing to do in these cases?

Paolo, what is the x86 way of dealing with situations like this (here we fail to allocate a pud, pmd, pte or helper structure)? It looks like you return
-ENOMEM to QEMU. Is that true for all cases?

Christian


* Re: [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup
  2014-04-29 13:36 [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Christian Borntraeger
                   ` (5 preceding siblings ...)
  2014-04-29 13:36 ` [GIT PULL 6/6] KVM: s390: enable IBS for single running VCPUs Christian Borntraeger
@ 2014-04-30 10:30 ` Paolo Bonzini
  6 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2014-04-30 10:30 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: Gleb Natapov, KVM, linux-s390, Cornelia Huck

On 29/04/2014 15:36, Christian Borntraeger wrote:
>   git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git  tags/kvm-s390-20140429

Pulled.  I still plan to go through the 50+ patch pull request and do at 
least some kind of API review.

kvm/queue will hopefully be updated today.

Paolo


Thread overview: 11+ messages
2014-04-29 13:36 [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Christian Borntraeger
2014-04-29 13:36 ` [GIT PULL 1/6] KVM: s390: Handle MVPG partial execution interception Christian Borntraeger
2014-04-30  8:07   ` Heiko Carstens
2014-04-30  8:53     ` Thomas Huth
2014-04-30 10:07     ` Christian Borntraeger
2014-04-29 13:36 ` [GIT PULL 2/6] KVM: s390: Add a function for checking the low-address protection Christian Borntraeger
2014-04-29 13:36 ` [GIT PULL 3/6] KVM: s390: Fixes for PFMF Christian Borntraeger
2014-04-29 13:36 ` [GIT PULL 4/6] KVM: s390: Add low-address protection to TEST BLOCK Christian Borntraeger
2014-04-29 13:36 ` [GIT PULL 5/6] KVM: s390: introduce kvm_s390_vcpu_{start,stop} Christian Borntraeger
2014-04-29 13:36 ` [GIT PULL 6/6] KVM: s390: enable IBS for single running VCPUs Christian Borntraeger
2014-04-30 10:30 ` [GIT PULL 0/6] KVM: s390: Fixes and single VCPU speedup Paolo Bonzini
