* [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
@ 2017-05-29 16:32 David Hildenbrand
  2017-05-29 16:32 ` [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k) David Hildenbrand
                   ` (2 more replies)
  0 siblings, 3 replies; 30+ messages in thread
From: David Hildenbrand @ 2017-05-29 16:32 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, Martin Schwidefsky, Heiko Carstens, Thomas Huth,
	david, Christian Borntraeger

Having to enable vm.alloc_pgste globally might not be the best solution:
4k page tables are then created for all processes, and running QEMU KVM
guests is more complicated than it should be.

Unfortunately, converting all page tables to 4k pgste page tables is
not possible without provoking various race conditions. However, we
might be able to let 2k and 4k page tables co-exist. We only need
4k page tables whenever we want to expose such memory to a guest. So
turning on 4k page table allocation at one point and only allowing such
memory to go into our gmap (guest mapping) might be a solution.

User space tools like QEMU that create the VM before mmap-ing any memory
that will belong to the guest can simply use the new VM type. Proper 4k
page tables will be created for any memory mmap-ed afterwards. And these
can be used in the gmap without problems. Existing user space tools
will work as before - having to enable vm.alloc_pgste explicitly.

This should play fine with vSIE, as vSIE code works completely on the gmap.
So if only page tables with pgste go into our gmap, we should be fine.

Not sure if this breaks important concepts, has serious performance
problems, or whether I am missing important cases. If so, I guess there is
really no way to avoid setting vm.alloc_pgste.

Possible modifications:
- Enable this option via an ioctl (like KVM_S390_ENABLE_SIE) instead of
  a new VM type
- Remember if we have mixed pgtables. If !mixed, we can make maybe faster
  decisions (if that is really a problem).

Only very quickly tested under KVM on z/VM. Will do some more testing if
this idea is worth working on.
Based on linux-next.git/master


David Hildenbrand (2):
  s390x: mm: allow mixed page table types (2k and 4k)
  KVM: s390: Introduce KVM_VM_S390_LATE_MMAP

 Documentation/virtual/kvm/api.txt | 13 +++++++++
 arch/s390/include/asm/pgtable.h   | 15 ++++++++--
 arch/s390/kvm/kvm-s390.c          | 10 +++++--
 arch/s390/mm/gmap.c               | 11 ++++++--
 arch/s390/mm/pgalloc.c            | 14 ++++++----
 arch/s390/mm/pgtable.c            | 59 +++++++++++++++++++++++++++++++++------
 include/uapi/linux/kvm.h          |  2 ++
 7 files changed, 101 insertions(+), 23 deletions(-)

-- 
2.9.3


* [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k)
  2017-05-29 16:32 [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste David Hildenbrand
@ 2017-05-29 16:32 ` David Hildenbrand
  2017-06-01 11:39   ` Christian Borntraeger
  2017-06-01 12:59   ` David Hildenbrand
  2017-05-29 16:32 ` [PATCH RFC 2/2] KVM: s390: Introduce KVM_VM_S390_LATE_MMAP David Hildenbrand
  2017-06-01 10:46 ` [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste Martin Schwidefsky
  2 siblings, 2 replies; 30+ messages in thread
From: David Hildenbrand @ 2017-05-29 16:32 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, Martin Schwidefsky, Heiko Carstens, Thomas Huth,
	david, Christian Borntraeger

For now, one has to globally enable vm.alloc_pgste in order to run
KVM guests. This results in every process getting 4k page tables instead
of only 2k page tables.

As the decision about which page table type to use has to be made at fork
time, there isn't much we can do to avoid this global system setting.
However, by allowing mixed page table types for one process, we can
simply turn on allocation of 4k page tables with pgstes at one point
and rely on the caller to make sure that everything getting mapped into
the gmap will be based on 4k page tables allocated/mmaped after the
switch to 4k page table allocation. If we detect some incompatible
page table, we will handle this as an ordinary fault, indicating -EFAULT.
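The "handle as an ordinary fault" rule can be sketched outside the kernel as follows (illustrative model only, not the patched __gmap_link() itself; names are made up):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative model: each 4k page backing a page table remembers
 * whether it was allocated with pgstes. */
struct pt_page {
	bool has_pgste;
};

/* Mirrors the idea of the new check: a page table allocated without
 * pgstes (i.e. before the switch) cannot be linked into the gmap,
 * so the caller sees an ordinary fault. */
static int gmap_link_table(const struct pt_page *pt)
{
	if (!pt->has_pgste)
		return -EFAULT;
	return 0; /* proceed with linking into the gmap */
}
```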

Background: QEMU will in general create the KVM VM before mmap-ing memory
that will become part of the guest address space. Every mmap will create
new page tables. To not break some legacy/other user space doing
this in a different order, this will have to be enabled explicitly by
the caller. So the vm.alloc_pgste option has to stay.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/s390/include/asm/pgtable.h | 15 +++++++++--
 arch/s390/kvm/kvm-s390.c        |  4 +--
 arch/s390/mm/gmap.c             | 11 +++++---
 arch/s390/mm/pgalloc.c          | 14 +++++-----
 arch/s390/mm/pgtable.c          | 59 ++++++++++++++++++++++++++++++++++-------
 5 files changed, 81 insertions(+), 22 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 3effb26..0fb6d29 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1089,6 +1089,17 @@ int get_pgste(struct mm_struct *mm, unsigned long hva, unsigned long *pgstep);
 int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
 			unsigned long *oldpte, unsigned long *oldpgste);
 
+static inline int pgtable_has_pgste(struct mm_struct *mm, unsigned long addr)
+{
+	struct page *page;
+
+	if (!mm_has_pgste(mm))
+		return 0;
+
+	page = pfn_to_page(addr >> PAGE_SHIFT);
+	return atomic_read(&page->_mapcount) & 0x4U;
+}
+
 /*
  * Certain architectures need to do special things when PTEs
  * within a page table are directly modified.  Thus, the following
@@ -1101,7 +1112,7 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 		pte_val(entry) &= ~_PAGE_NOEXEC;
 	if (pte_present(entry))
 		pte_val(entry) &= ~_PAGE_UNUSED;
-	if (mm_has_pgste(mm))
+	if (pgtable_has_pgste(mm, __pa(ptep)))
 		ptep_set_pte_at(mm, addr, ptep, entry);
 	else
 		*ptep = entry;
@@ -1541,7 +1552,7 @@ static inline swp_entry_t __swp_entry(unsigned long type, unsigned long offset)
 
 extern int vmem_add_mapping(unsigned long start, unsigned long size);
 extern int vmem_remove_mapping(unsigned long start, unsigned long size);
-extern int s390_enable_sie(void);
+extern int s390_enable_sie(bool mixed_pgtables);
 extern int s390_enable_skey(void);
 extern void s390_reset_cmma(struct mm_struct *mm);
 
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 4ef3035..89684bb 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -354,7 +354,7 @@ long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg)
 {
 	if (ioctl == KVM_S390_ENABLE_SIE)
-		return s390_enable_sie();
+		return s390_enable_sie(false);
 	return -EINVAL;
 }
 
@@ -1808,7 +1808,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 		goto out_err;
 #endif
 
-	rc = s390_enable_sie();
+	rc = s390_enable_sie(false);
 	if (rc)
 		goto out_err;
 
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 4fb3d3c..ff24ada 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -586,6 +586,9 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr)
 	/* large pmds cannot yet be handled */
 	if (pmd_large(*pmd))
 		return -EFAULT;
+	/* only page tables with pgstes can be linked into a gmap */
+	if (!pgtable_has_pgste(mm, pmd_pfn(*pmd) << PAGE_SHIFT))
+		return -EFAULT;
 	/* Link gmap segment table entry location to page table. */
 	rc = radix_tree_preload(GFP_KERNEL);
 	if (rc)
@@ -2123,17 +2126,19 @@ static inline void thp_split_mm(struct mm_struct *mm)
 /*
  * switch on pgstes for its userspace process (for kvm)
  */
-int s390_enable_sie(void)
+int s390_enable_sie(bool mixed_pgtables)
 {
 	struct mm_struct *mm = current->mm;
 
 	/* Do we have pgstes? if yes, we are done */
 	if (mm_has_pgste(mm))
 		return 0;
-	/* Fail if the page tables are 2K */
-	if (!mm_alloc_pgste(mm))
+	/* Fail if the page tables are 2K and mixed pgtables are disabled */
+	if (!mixed_pgtables && !mm_alloc_pgste(mm))
 		return -EINVAL;
 	down_write(&mm->mmap_sem);
+	/* allocate page tables with pgste from now on if not already done */
+	mm->context.alloc_pgste = 1;
 	mm->context.has_pgste = 1;
 	/* split thp mappings and disable thp for future mappings */
 	thp_split_mm(mm);
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 18918e3..2790473 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -218,7 +218,7 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 	table = (unsigned long *) page_to_phys(page);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_set(&page->_mapcount, 3);
+		atomic_set(&page->_mapcount, 4);
 		clear_table(table, _PAGE_INVALID, PAGE_SIZE/2);
 		clear_table(table + PTRS_PER_PTE, 0, PAGE_SIZE/2);
 	} else {
@@ -238,7 +238,7 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 	unsigned int bit, mask;
 
 	page = pfn_to_page(__pa(table) >> PAGE_SHIFT);
-	if (!mm_alloc_pgste(mm)) {
+	if (!pgtable_has_pgste(mm, __pa(table))) {
 		/* Free 2K page table fragment of a 4K page */
 		bit = (__pa(table) & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
 		spin_lock_bh(&mm->context.pgtable_lock);
@@ -266,9 +266,9 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 
 	mm = tlb->mm;
 	page = pfn_to_page(__pa(table) >> PAGE_SHIFT);
-	if (mm_alloc_pgste(mm)) {
+	if (pgtable_has_pgste(mm, __pa(table))) {
 		gmap_unlink(mm, table, vmaddr);
-		table = (unsigned long *) (__pa(table) | 3);
+		table = (unsigned long *) (__pa(table) | 4);
 		tlb_remove_table(tlb, table);
 		return;
 	}
@@ -286,7 +286,7 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 
 static void __tlb_remove_table(void *_table)
 {
-	unsigned int mask = (unsigned long) _table & 3;
+	unsigned int mask = (unsigned long) _table & 7;
 	void *table = (void *)((unsigned long) _table ^ mask);
 	struct page *page = pfn_to_page(__pa(table) >> PAGE_SHIFT);
 
@@ -299,11 +299,13 @@ static void __tlb_remove_table(void *_table)
 		if (atomic_xor_bits(&page->_mapcount, mask << 4) != 0)
 			break;
 		/* fallthrough */
-	case 3:		/* 4K page table with pgstes */
+	case 4:		/* 4K page table with pgstes */
 		pgtable_page_dtor(page);
 		atomic_set(&page->_mapcount, -1);
 		__free_page(page);
 		break;
+	default:
+		WARN_ONCE(true, "Unknown table type: %x", mask);
 	}
 }
 
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index d4d409b..b22c2b6 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -196,7 +196,7 @@ static inline pgste_t ptep_xchg_start(struct mm_struct *mm,
 {
 	pgste_t pgste = __pgste(0);
 
-	if (mm_has_pgste(mm)) {
+	if (pgtable_has_pgste(mm, __pa(ptep))) {
 		pgste = pgste_get_lock(ptep);
 		pgste = pgste_pte_notify(mm, addr, ptep, pgste);
 	}
@@ -207,7 +207,7 @@ static inline pte_t ptep_xchg_commit(struct mm_struct *mm,
 				    unsigned long addr, pte_t *ptep,
 				    pgste_t pgste, pte_t old, pte_t new)
 {
-	if (mm_has_pgste(mm)) {
+	if (pgtable_has_pgste(mm, __pa(ptep))) {
 		if (pte_val(old) & _PAGE_INVALID)
 			pgste_set_key(ptep, pgste, new, mm);
 		if (pte_val(new) & _PAGE_INVALID) {
@@ -263,7 +263,7 @@ pte_t ptep_modify_prot_start(struct mm_struct *mm, unsigned long addr,
 	preempt_disable();
 	pgste = ptep_xchg_start(mm, addr, ptep);
 	old = ptep_flush_lazy(mm, addr, ptep);
-	if (mm_has_pgste(mm)) {
+	if (pgtable_has_pgste(mm, __pa(ptep))) {
 		pgste = pgste_update_all(old, pgste, mm);
 		pgste_set(ptep, pgste);
 	}
@@ -278,7 +278,7 @@ void ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
 
 	if (!MACHINE_HAS_NX)
 		pte_val(pte) &= ~_PAGE_NOEXEC;
-	if (mm_has_pgste(mm)) {
+	if (pgtable_has_pgste(mm, __pa(ptep))) {
 		pgste = pgste_get(ptep);
 		pgste_set_key(ptep, pgste, pte, mm);
 		pgste = pgste_set_pte(ptep, pgste, pte);
@@ -445,7 +445,7 @@ void ptep_set_pte_at(struct mm_struct *mm, unsigned long addr,
 {
 	pgste_t pgste;
 
-	/* the mm_has_pgste() check is done in set_pte_at() */
+	/* the pgtable_has_pgste() check is done in set_pte_at() */
 	preempt_disable();
 	pgste = pgste_get_lock(ptep);
 	pgste_val(pgste) &= ~_PGSTE_GPS_ZERO;
@@ -569,6 +569,9 @@ void ptep_zap_unused(struct mm_struct *mm, unsigned long addr,
 	pgste_t pgste;
 	pte_t pte;
 
+	if (!pgtable_has_pgste(mm, __pa(ptep)))
+		return;
+
 	/* Zap unused and logically-zero pages */
 	preempt_disable();
 	pgste = pgste_get_lock(ptep);
@@ -588,18 +591,22 @@ void ptep_zap_unused(struct mm_struct *mm, unsigned long addr,
 
 void ptep_zap_key(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
+	const bool has_pgste = pgtable_has_pgste(mm, __pa(ptep));
 	unsigned long ptev;
 	pgste_t pgste;
 
 	/* Clear storage key */
 	preempt_disable();
-	pgste = pgste_get_lock(ptep);
-	pgste_val(pgste) &= ~(PGSTE_ACC_BITS | PGSTE_FP_BIT |
-			      PGSTE_GR_BIT | PGSTE_GC_BIT);
+	if (has_pgste) {
+		pgste = pgste_get_lock(ptep);
+		pgste_val(pgste) &= ~(PGSTE_ACC_BITS | PGSTE_FP_BIT |
+				      PGSTE_GR_BIT | PGSTE_GC_BIT);
+	}
 	ptev = pte_val(*ptep);
 	if (!(ptev & _PAGE_INVALID) && (ptev & _PAGE_WRITE))
 		page_set_storage_key(ptev & PAGE_MASK, PAGE_DEFAULT_KEY, 1);
-	pgste_set_unlock(ptep, pgste);
+	if (has_pgste)
+		pgste_set_unlock(ptep, pgste);
 	preempt_enable();
 }
 
@@ -634,6 +641,10 @@ bool test_and_clear_guest_dirty(struct mm_struct *mm, unsigned long addr)
 	 */
 	if (pmd_large(*pmd))
 		return true;
+	if (!pgtable_has_pgste(mm, __pa(pmd))) {
+		WARN_ONCE(true, "Guest address on page table without pgste");
+		return false;
+	}
 
 	ptep = pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (unlikely(!ptep))
@@ -670,6 +681,11 @@ int set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
 	ptep = get_locked_pte(mm, addr, &ptl);
 	if (unlikely(!ptep))
 		return -EFAULT;
+	if (!pgtable_has_pgste(mm, __pa(ptep))) {
+		pte_unmap_unlock(ptep, ptl);
+		WARN_ONCE(true, "Guest address on page table without pgste");
+		return -EFAULT;
+	}
 
 	new = old = pgste_get_lock(ptep);
 	pgste_val(new) &= ~(PGSTE_GR_BIT | PGSTE_GC_BIT |
@@ -748,6 +764,11 @@ int reset_guest_reference_bit(struct mm_struct *mm, unsigned long addr)
 	ptep = get_locked_pte(mm, addr, &ptl);
 	if (unlikely(!ptep))
 		return -EFAULT;
+	if (!pgtable_has_pgste(mm, __pa(ptep))) {
+		pte_unmap_unlock(ptep, ptl);
+		WARN_ONCE(true, "Guest address on page table without pgste");
+		return -EFAULT;
+	}
 
 	new = old = pgste_get_lock(ptep);
 	/* Reset guest reference bit only */
@@ -780,6 +801,11 @@ int get_guest_storage_key(struct mm_struct *mm, unsigned long addr,
 	ptep = get_locked_pte(mm, addr, &ptl);
 	if (unlikely(!ptep))
 		return -EFAULT;
+	if (!pgtable_has_pgste(mm, __pa(ptep))) {
+		pte_unmap_unlock(ptep, ptl);
+		WARN_ONCE(true, "Guest address on page table without pgste");
+		return -EFAULT;
+	}
 
 	pgste = pgste_get_lock(ptep);
 	*key = (pgste_val(pgste) & (PGSTE_ACC_BITS | PGSTE_FP_BIT)) >> 56;
@@ -820,6 +846,11 @@ int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
 	ptep = get_locked_pte(mm, hva, &ptl);
 	if (unlikely(!ptep))
 		return -EFAULT;
+	if (!pgtable_has_pgste(mm, __pa(ptep))) {
+		pte_unmap_unlock(ptep, ptl);
+		WARN_ONCE(true, "Guest address on page table without pgste");
+		return -EFAULT;
+	}
 	pgste = pgste_get_lock(ptep);
 	pgstev = pgste_val(pgste);
 	if (oldpte)
@@ -912,6 +943,11 @@ int set_pgste_bits(struct mm_struct *mm, unsigned long hva,
 	ptep = get_locked_pte(mm, hva, &ptl);
 	if (unlikely(!ptep))
 		return -EFAULT;
+	if (!pgtable_has_pgste(mm, __pa(ptep))) {
+		pte_unmap_unlock(ptep, ptl);
+		WARN_ONCE(true, "Guest address on page table without pgste");
+		return -EFAULT;
+	}
 	new = pgste_get_lock(ptep);
 
 	pgste_val(new) &= ~bits;
@@ -939,6 +975,11 @@ int get_pgste(struct mm_struct *mm, unsigned long hva, unsigned long *pgstep)
 	ptep = get_locked_pte(mm, hva, &ptl);
 	if (unlikely(!ptep))
 		return -EFAULT;
+	if (!pgtable_has_pgste(mm, __pa(ptep))) {
+		pte_unmap_unlock(ptep, ptl);
+		WARN_ONCE(true, "Guest address on page table without pgste");
+		return -EFAULT;
+	}
 	*pgstep = pgste_val(pgste_get(ptep));
 	pte_unmap_unlock(ptep, ptl);
 	return 0;
-- 
2.9.3


* [PATCH RFC 2/2] KVM: s390: Introduce KVM_VM_S390_LATE_MMAP
  2017-05-29 16:32 [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste David Hildenbrand
  2017-05-29 16:32 ` [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k) David Hildenbrand
@ 2017-05-29 16:32 ` David Hildenbrand
  2017-06-01 10:46 ` [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste Martin Schwidefsky
  2 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2017-05-29 16:32 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, Martin Schwidefsky, Heiko Carstens, Thomas Huth,
	david, Christian Borntraeger

Let's introduce a new VM type that allows user space to run KVM guests
without having to enable vm.alloc_pgste for all user space processes.
As long as user space follows these simple rules with the new VM type,
everything should be fine:
- Use only mmap for memory to be used with user memory regions
- Do all such mmap calls after the VM has been created.

If user space fails to obey these rules, KVM will report -EFAULT
whenever an address in such an incompatible range is accessed.
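A user-space sketch of how the new capability and VM type would be used together (the define values are taken from this patch and guarded, since the series is an RFC and they are not in installed headers; the fallback logic is illustrative):

```c
#include <assert.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Values from this patch; guarded because they are not upstream. */
#ifndef KVM_CAP_S390_LATE_MMAP
#define KVM_CAP_S390_LATE_MMAP 200
#endif
#ifndef KVM_VM_S390_LATE_MMAP
#define KVM_VM_S390_LATE_MMAP 2
#endif

/* Sketch: use the new VM type when the kernel advertises it,
 * otherwise fall back to an ordinary VM (which then requires
 * vm.alloc_pgste to be set). */
static int create_vm(int kvm_fd)
{
	int type = 0;

	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_LATE_MMAP) > 0)
		type = KVM_VM_S390_LATE_MMAP;
	return ioctl(kvm_fd, KVM_CREATE_VM, type);
}
```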

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 Documentation/virtual/kvm/api.txt | 13 +++++++++++++
 arch/s390/kvm/kvm-s390.c          |  8 ++++++--
 include/uapi/linux/kvm.h          |  2 ++
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 912b7df..056f391 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -117,6 +117,16 @@ In order to create user controlled virtual machines on S390, check
 KVM_CAP_S390_UCONTROL and use the flag KVM_VM_S390_UCONTROL as
 privileged user (CAP_SYS_ADMIN).
 
+When creating a virtual machine with the flag KVM_VM_S390_LATE_MMAP
+on s390, only memory mmap'ed after VM creation is guaranteed to be usable
+in user memory regions, because special page tables that might be
+necessary for virtualization will only be created for memory mmap'ed
+after this call. This avoids having to enable system wide vm.alloc_pgste
+in order to create KVM guests. Whether this type of VM can be created is
+indicated by the capability "KVM_CAP_S390_LATE_MMAP". If memory used for
+user memory regions is mmap'ed before VM creation or !mmap'ed memory is used
+for user memory regions, don't use this VM type.
+
 To use hardware assisted virtualization on MIPS (VZ ASE) rather than
 the default trap & emulate implementation (which changes the virtual
 memory layout to fit in user mode), check KVM_CAP_MIPS_VZ and use the
@@ -985,6 +995,9 @@ the memory region are automatically reflected into the guest.  For example, an
 mmap() that affects the region will be made visible immediately.  Another
 example is madvise(MADV_DROP).
 
+See KVM_CREATE_VM for special handling related to KVM_VM_S390_LATE_MMAP
+on s390x.
+
 It is recommended to use this API instead of the KVM_SET_MEMORY_REGION ioctl.
 The KVM_SET_MEMORY_REGION does not allow fine grained control over memory
 allocation and is deprecated.
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 89684bb..75df5cd 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -389,6 +389,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_S390_USER_INSTR0:
 	case KVM_CAP_S390_CMMA_MIGRATION:
 	case KVM_CAP_S390_AIS:
+	case KVM_CAP_S390_LATE_MMAP:
 		r = 1;
 		break;
 	case KVM_CAP_S390_MEM_OP:
@@ -1796,6 +1797,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	int i, rc;
 	char debug_name[16];
 	static unsigned long sca_offset;
+	bool mixed_pgtables = false;
 
 	rc = -EINVAL;
 #ifdef CONFIG_KVM_S390_UCONTROL
@@ -1804,11 +1806,13 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if ((type & KVM_VM_S390_UCONTROL) && (!capable(CAP_SYS_ADMIN)))
 		goto out_err;
 #else
-	if (type)
+	if (type & KVM_VM_S390_LATE_MMAP)
+		mixed_pgtables = true;
+	else if (type)
 		goto out_err;
 #endif
 
-	rc = s390_enable_sie(false);
+	rc = s390_enable_sie(mixed_pgtables);
 	if (rc)
 		goto out_err;
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 2b8dc1c..436ba30 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -726,6 +726,7 @@ struct kvm_ppc_resize_hpt {
 
 /* machine type bits, to be used as argument to KVM_CREATE_VM */
 #define KVM_VM_S390_UCONTROL	1
+#define KVM_VM_S390_LATE_MMAP	2
 
 /* on ppc, 0 indicate default, 1 should force HV and 2 PR */
 #define KVM_VM_PPC_HV 1
@@ -925,6 +926,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_X86_GUEST_MWAIT 143
 #define KVM_CAP_ARM_USER_IRQ 144
 #define KVM_CAP_S390_CMMA_MIGRATION 145
+#define KVM_CAP_S390_LATE_MMAP 200
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.9.3


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-05-29 16:32 [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste David Hildenbrand
  2017-05-29 16:32 ` [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k) David Hildenbrand
  2017-05-29 16:32 ` [PATCH RFC 2/2] KVM: s390: Introduce KVM_VM_S390_LATE_MMAP David Hildenbrand
@ 2017-06-01 10:46 ` Martin Schwidefsky
  2017-06-01 11:24   ` Christian Borntraeger
                     ` (2 more replies)
  2 siblings, 3 replies; 30+ messages in thread
From: Martin Schwidefsky @ 2017-06-01 10:46 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kvm, linux-kernel, Heiko Carstens, Thomas Huth, Christian Borntraeger

Hi David,

it is nice to see that you are still working on s390 related topics.

On Mon, 29 May 2017 18:32:00 +0200
David Hildenbrand <david@redhat.com> wrote:

> Having to enable vm.alloc_pgste globally might not be the best solution.
> 4k page tables are created for all processes and running QEMU KVM guests
> is more complicated than it should be.

To run KVM guests you need to issue a single sysctl to set vm.allocate_pgste;
this is the best solution we found so far.

> Unfortunately, converting all page tables to 4k pgste page tables is
> not possible without provoking various race conditions.

That is one approach we tried and was found to be buggy. The point is that
you are not allowed to reallocate a page table while a VMA exists that is
in the address range of that page table.

Another approach we tried is to use an ELF flag on the qemu executable.
That does not work either because fs/exec.c allocates and populates the
new mm struct for the argument pages before fs/binfmt_elf.c comes into
play.

> However, we
> might be able to let 2k and 4k page tables co-exist. We only need
> 4k page tables whenever we want to expose such memory to a guest. So
> turning on 4k page table allocation at one point and only allowing such
> memory to go into our gmap (guest mapping) might be a solution.
> User space tools like QEMU that create the VM before mmap-ing any memory
> that will belong to the guest can simply use the new VM type. Proper 4k
> page tables will be created for any memory mmap-ed afterwards. And these
> can be used in the gmap without problems. Existing user space tools
> will work as before - having to enable vm.alloc_pgste explicitly.

I can not say that I like this approach. Right now a process either uses
2K page tables or 4K page tables. With your patch it is basically per page
table page. Memory areas that existed before the switch to allocate
4K page tables can not be mapped to the guests gmap anymore. There might
be hidden pitfalls e.g. with guest migration.

> This should play fine with vSIE, as vSIE code works completely on the gmap.
> So if only page tables with pgste go into our gmap, we should be fine.
> 
> Not sure if this breaks important concepts, has some serious performance
> problems or I am missing important cases. If so, I guess there is really
> no way to avoid setting vm.alloc_pgste.
> 
> Possible modifications:
> - Enable this option via an ioctl (like KVM_S390_ENABLE_SIE) instead of
>   a new VM type
> - Remember if we have mixed pgtables. If !mixed, we can make maybe faster
>   decisions (if that is really a problem).

What I do not like in particular is this function:

static inline int pgtable_has_pgste(struct mm_struct *mm, unsigned long addr)
{
	struct page *page;

	if (!mm_has_pgste(mm))
		return 0;

	page = pfn_to_page(addr >> PAGE_SHIFT);
	return atomic_read(&page->_mapcount) & 0x4U;
}

The check for pgstes got more complicated, it used to be a test-under-mask
of a bit in the mm struct and a branch. Now we have an additional pfn_to_page,
an atomic_read and a bit test. That is done multiple times for every ptep_xxx
operation. 

Is the operational simplification of not having to set vm.allocate_pgste really
that important?
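For reference, the bit values behind the test quoted above, as introduced by patch 1 (a simplified model; the pending-free bits at `mask << 4` are left out):

```c
#include <assert.h>

/* Model of the page->_mapcount encoding after patch 1. */
#define FRAG_LOWER	0x1UL	/* lower 2k fragment of a 4k page in use */
#define FRAG_UPPER	0x2UL	/* upper 2k fragment in use */
#define PGSTE_TABLE	0x4UL	/* 4k table with pgstes (was 3 == 1|2) */

/* The quoted check boils down to this bit test; the marker moved
 * from 3 to 4 so it cannot alias the two fragment bits. */
static int has_pgste(unsigned long mapcount)
{
	return mapcount & 0x4UL;
}

/* __tlb_remove_table() correspondingly has to mask three bits
 * instead of two to recover the type from a tagged pointer. */
static unsigned long table_type(unsigned long tagged_table)
{
	return tagged_table & 7; /* was "& 3" */
}
```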

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-01 10:46 ` [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste Martin Schwidefsky
@ 2017-06-01 11:24   ` Christian Borntraeger
  2017-06-01 11:27   ` David Hildenbrand
  2017-06-02  7:02   ` Heiko Carstens
  2 siblings, 0 replies; 30+ messages in thread
From: Christian Borntraeger @ 2017-06-01 11:24 UTC (permalink / raw)
  To: Martin Schwidefsky, David Hildenbrand
  Cc: kvm, linux-kernel, Heiko Carstens, Thomas Huth

On 06/01/2017 12:46 PM, Martin Schwidefsky wrote:
> Hi David,
> 
> it is nice to see that you are still working on s390 related topics.
> 
> On Mon, 29 May 2017 18:32:00 +0200
> David Hildenbrand <david@redhat.com> wrote:
> 
>> Having to enable vm.alloc_pgste globally might not be the best solution.
>> 4k page tables are created for all processes and running QEMU KVM guests
>> is more complicated than it should be.
> 
> To run KVM guests you need to issue a single sysctl to set vm.allocate_pgste;
> this is the best solution we found so far.

Suse and Ubuntu seem to have a sysctl.conf file in the qemu-kvm package that
does a global switch.


> 
>> Unfortunately, converting all page tables to 4k pgste page tables is
>> not possible without provoking various race conditions.
> 
> That is one approach we tried and was found to be buggy. The point is that
> you are not allowed to reallocate a page table while a VMA exists that is
> in the address range of that page table.
> 
> Another approach we tried is to use an ELF flag on the qemu executable.
> That does not work either because fs/exec.c allocates and populates the
> new mm struct for the argument pages before fs/binfmt_elf.c comes into
> play.
> 
>> However, we
>> might be able to let 2k and 4k page tables co-exist. We only need
>> 4k page tables whenever we want to expose such memory to a guest. So
>> turning on 4k page table allocation at one point and only allowing such
>> memory to go into our gmap (guest mapping) might be a solution.
>> User space tools like QEMU that create the VM before mmap-ing any memory
>> that will belong to the guest can simply use the new VM type. Proper 4k
>> page tables will be created for any memory mmap-ed afterwards. And these
>> can be used in the gmap without problems. Existing user space tools
>> will work as before - having to enable vm.alloc_pgste explicitly.
> 
> I can not say that I like this approach. Right now a process either uses
> 2K page tables or 4K page tables. With your patch it is basically per page
> table page. Memory areas that existed before the switch to allocate
> 4K page tables can not be mapped to the guests gmap anymore. There might
> be hidden pitfalls e.g. with guest migration.
> 
>> This should play fine with vSIE, as vSIE code works completely on the gmap.
>> So if only page tables with pgste go into our gmap, we should be fine.
>>
>> Not sure if this breaks important concepts, has some serious performance
>> problems or I am missing important cases. If so, I guess there is really
>> no way to avoid setting vm.alloc_pgste.
>>
>> Possible modifications:
>> - Enable this option via an ioctl (like KVM_S390_ENABLE_SIE) instead of
>>   a new VM type
>> - Remember if we have mixed pgtables. If !mixed, we can make maybe faster
>>   decisions (if that is really a problem).
> 
> What I do not like in particular is this function:
> 
> static inline int pgtable_has_pgste(struct mm_struct *mm, unsigned long addr)
> {
> 	struct page *page;
> 
> 	if (!mm_has_pgste(mm))
> 		return 0;
> 
> 	page = pfn_to_page(addr >> PAGE_SHIFT);
> 	return atomic_read(&page->_mapcount) & 0x4U;
> }

The good thing with this approach is that the first condition will make non-KVM
processes as fast as before. In fact, given that the sysctl setting is present
everywhere, this patch might actually move non-KVM processes back to 2k page
tables, so it would improve those.


> 
> The check for pgstes got more complicated, it used to be a test-under-mask
> of a bit in the mm struct and a branch. Now we have an additional pfn_to_page,
> an atomic_read and a bit test. That is done multiple times for every ptep_xxx
> operation. 
> 
> Is the operational simplification of not having to set vm.allocate_pgste really
> that important ?
> 


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-01 10:46 ` [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste Martin Schwidefsky
  2017-06-01 11:24   ` Christian Borntraeger
@ 2017-06-01 11:27   ` David Hildenbrand
  2017-06-02  7:06     ` Heiko Carstens
  2017-06-02  7:02   ` Heiko Carstens
  2 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2017-06-01 11:27 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: kvm, linux-kernel, Heiko Carstens, Thomas Huth, Christian Borntraeger

Hi Martin,

thanks for having a look at this!

>> However, we
>> might be able to let 2k and 4k page tables co-exist. We only need
>> 4k page tables whenever we want to expose such memory to a guest. So
>> turning on 4k page table allocation at one point and only allowing such
>> memory to go into our gmap (guest mapping) might be a solution.
>> User space tools like QEMU that create the VM before mmap-ing any memory
>> that will belong to the guest can simply use the new VM type. Proper 4k
>> page tables will be created for any memory mmap-ed afterwards. And these
>> can be used in the gmap without problems. Existing user space tools
>> will work as before - having to enable vm.alloc_pgste explicitly.
> 
> I can not say that I like this approach. Right now a process either uses
> 2K page tables or 4K page tables. With your patch it is basically per page
> table page. Memory areas that existed before the switch to allocate
> 4K page tables can not be mapped to the guests gmap anymore. There might
> be hidden pitfalls e.g. with guest migration.

As long as QEMU knows what it is doing, I don't see problems with
migration. Even for migration, all memory is mmap-ed afterwards. But yes,
we have to think about this carefully. (That's why QEMU explicitly has
to switch this behavior on, so as not to break user space that mmaps
memory differently.)

> 
>> This should play fine with vSIE, as vSIE code works completely on the gmap.
>> So if only page tables with pgste go into our gmap, we should be fine.
>>
>> Not sure if this breaks important concepts, has some serious performance
>> problems or I am missing important cases. If so, I guess there is really
>> no way to avoid setting vm.alloc_pgste.
>>
>> Possible modifications:
>> - Enable this option via an ioctl (like KVM_S390_ENABLE_SIE) instead of
>>   a new VM type
>> - Remember if we have mixed pgtables. If !mixed, we can make maybe faster
>>   decisions (if that is really a problem).
> 
> What I do not like in particular is this function:
> 
> static inline int pgtable_has_pgste(struct mm_struct *mm, unsigned long addr)
> {
> 	struct page *page;
> 
> 	if (!mm_has_pgste(mm))
> 		return 0;
> 
> 	page = pfn_to_page(addr >> PAGE_SHIFT);
> 	return atomic_read(&page->_mapcount) & 0x4U;
> }
> 
> The check for pgstes got more complicated, it used to be a test-under-mask
> of a bit in the mm struct and a branch. Now we have an additional pfn_to_page,
> an atomic_read and a bit test. That is done multiple times for every ptep_xxx
> operation. 

The "heavy overhead" only applies to processes using KVM; ordinary
processes only pay a single bit test (just !mm_has_pgste(mm), as before).

Can you judge how much overhead this could be in practice? (Especially:
is it noticeable at all?)

We could of course tune the functions to reduce the number of
pgtable_has_pgste() calls, but we cannot avoid at least one such
pfn_to_page(). And the question is whether even one such test degrades
performance noticeably. Unfortunately, the z/VM test system I have is
not really a good fit for performance tests.
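For reference, the check being discussed can be modeled in plain
user-space C. The struct, the 0x4 bit value and the array lookup below
are simplified stand-ins for the kernel's `page->_mapcount` encoding and
`pfn_to_page()`, not the real interfaces:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* toy model of struct page: only the field the check touches */
struct page {
	atomic_int _mapcount;
};

#define NR_PAGES   16
#define PAGE_SHIFT 12
#define PGSTE_BIT  0x4U                    /* bit reused to mark 4k pgste tables */

static struct page mock_mem[NR_PAGES];     /* stand-in for the memmap */

/* stand-in for pfn_to_page(): index into the mock memmap */
struct page *pfn_to_page(unsigned long pfn)
{
	return &mock_mem[pfn % NR_PAGES];
}

/* mark a page-table page as carrying pgstes (done once at allocation) */
void mark_pgste(unsigned long addr)
{
	atomic_fetch_or(&pfn_to_page(addr >> PAGE_SHIFT)->_mapcount, PGSTE_BIT);
}

/* the check under discussion: one branch for non-KVM mms, plus a
 * page lookup, an atomic read and a bit test for KVM mms */
bool pgtable_has_pgste(bool mm_has_pgste, unsigned long addr)
{
	if (!mm_has_pgste)
		return false;
	return atomic_load(&pfn_to_page(addr >> PAGE_SHIFT)->_mapcount) & PGSTE_BIT;
}
```

A non-KVM mm pays only the first branch, exactly as with the old
mm_has_pgste() test; a KVM mm additionally pays the page lookup, the
atomic read and the bit test on every ptep_xxx operation, which is the
overhead in question.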

> 
> Is the operational simplification of not having to set vm.allocate_pgste really
> that important ?
> 

The problem is that once we have a package that installs QEMU, and we
don't want to confuse the user (by making him care about
vm.allocate_pgste), we have to set it automatically.

E.g. see https://bugzilla.redhat.com/show_bug.cgi?id=1290589

a) install a config file that sets it on every boot (like Fedora does)
b) set vm.allocate_pgste=1 right away, so QEMU can be used without a reboot

Now, this is possible, but as you know it can implicitly influence other
workloads (double the page table size for everything), especially on
systems that are not pure KVM hosts. So by simply installing the QEMU
package, we change system behavior, even if QEMU will only be used
occasionally on that host.

So what we actually want is: PGSTEs only for processes that really need
them, with the details handled internally.

I don't really see many ways to do this differently. The more we can
hide such internal specialties, the better. E.g. think about guests
being backed completely by huge pages some day in the future: the
concept of pgstes does not apply there. Still, pgstes would have to be
enabled for the complete system, and there isn't much we can do about it
(s390_enable_sie() can't know that QEMU will only use huge pages, so we
would need yet another VM type in QEMU to handle guests that can only
map huge pages).

An alternative: have some process enable PGSTE for all of its future
children, then fork+execv QEMU. However, such an intermediate process
will most likely confuse tooling like libvirt when it comes to process
ids.

Can you think of any other way to handle this? Setting
vm.allocate_pgste=1 should really be avoided if possible.


Thanks a lot again!


-- 

Thanks,

David

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k)
  2017-05-29 16:32 ` [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k) David Hildenbrand
@ 2017-06-01 11:39   ` Christian Borntraeger
  2017-06-01 12:44     ` David Hildenbrand
  2017-06-01 12:59   ` David Hildenbrand
  1 sibling, 1 reply; 30+ messages in thread
From: Christian Borntraeger @ 2017-06-01 11:39 UTC (permalink / raw)
  To: David Hildenbrand, kvm
  Cc: linux-kernel, Martin Schwidefsky, Heiko Carstens, Thomas Huth

On 05/29/2017 06:32 PM, David Hildenbrand wrote:

>  	new = old = pgste_get_lock(ptep);
>  	pgste_val(new) &= ~(PGSTE_GR_BIT | PGSTE_GC_BIT |
> @@ -748,6 +764,11 @@ int reset_guest_reference_bit(struct mm_struct *mm, unsigned long addr)
>  	ptep = get_locked_pte(mm, addr, &ptl);
>  	if (unlikely(!ptep))
>  		return -EFAULT;
> +	if (!pgtable_has_pgste(mm, __pa(ptep))) {
> +		pte_unmap_unlock(ptep, ptl);
> +		WARN_ONCE(true, "Guest address on page table without pgste");

All these WARN_ONCE. Is there a way how a malicious user can trigger this or is this checked
everywhere and triggered would be indeed a bug?


* Re: [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k)
  2017-06-01 11:39   ` Christian Borntraeger
@ 2017-06-01 12:44     ` David Hildenbrand
  0 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2017-06-01 12:44 UTC (permalink / raw)
  To: Christian Borntraeger, kvm
  Cc: linux-kernel, Martin Schwidefsky, Heiko Carstens, Thomas Huth

On 01.06.2017 13:39, Christian Borntraeger wrote:
> On 05/29/2017 06:32 PM, David Hildenbrand wrote:
> 
>>  	new = old = pgste_get_lock(ptep);
>>  	pgste_val(new) &= ~(PGSTE_GR_BIT | PGSTE_GC_BIT |
>> @@ -748,6 +764,11 @@ int reset_guest_reference_bit(struct mm_struct *mm, unsigned long addr)
>>  	ptep = get_locked_pte(mm, addr, &ptl);
>>  	if (unlikely(!ptep))
>>  		return -EFAULT;
>> +	if (!pgtable_has_pgste(mm, __pa(ptep))) {
>> +		pte_unmap_unlock(ptep, ptl);
>> +		WARN_ONCE(true, "Guest address on page table without pgste");
> 
> All these WARN_ONCE. Is there a way how a malicious user can trigger this or is this checked
> everywhere and triggered would be indeed a bug?

Very good question. I added these for testing purposes, but leaving the
WARN_ONCE here is wrong.

The user can create memslots backed by "wrong" memory. Whenever such
memory is linked into the gmap, we return -EFAULT. So at any time we
will only have page tables with pgstes in our gmap.

However, all these functions here go via memslots:

test_and_clear_guest_dirty
-> via memslot from memslot list

set_guest_storage_key
reset_guest_reference_bit
get_guest_storage_key
pgste_perform_essa
set_pgste_bits
get_pgste
->  come via gfn_to_hva() -> gfn_to_memslot() -> search_memslots -> via
memslot list


And then they use the calculated host address to simply walk the
ordinary process page tables (get_locked_pte()), which may not have
pgstes.

So simply returning -EFAULT here is the right thing to do, dropping the
WARN_ONCE.
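To make that concrete, here is a user-space model of the corrected path.
The memslot layout and the per-slot boolean are simplifications of the
real per-page-table check, and -EFAULT stands in for the kernel's error
return:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* toy memslot: a guest frame range mapped to a host virtual address */
struct memslot {
	unsigned long base_gfn, npages, userspace_addr;
	bool backed_by_pgste_tables;   /* simplified per-page-table property */
};

/* stand-in for gfn_to_hva(): linear search as in search_memslots() */
long gfn_to_hva(struct memslot *slots, int n, unsigned long gfn)
{
	for (int i = 0; i < n; i++)
		if (gfn >= slots[i].base_gfn &&
		    gfn < slots[i].base_gfn + slots[i].npages)
			return slots[i].userspace_addr +
			       ((gfn - slots[i].base_gfn) << 12);
	return -EFAULT;
}

/* corrected behavior: a guest address on a page table without pgstes
 * is user-triggerable, so fail cleanly instead of WARN_ONCE */
int reset_guest_reference_bit_model(struct memslot *slot, unsigned long gfn)
{
	if (gfn < slot->base_gfn || gfn >= slot->base_gfn + slot->npages)
		return -EFAULT;
	if (!slot->backed_by_pgste_tables)
		return -EFAULT;        /* no warning: not a kernel bug */
	return 0;                      /* would clear the reference bit here */
}
```

The key point is the last function: memory the user mmap-ed before the
switch to 4k tables must fail cleanly with -EFAULT rather than warn.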

Thanks!

-- 

Thanks,

David


* Re: [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k)
  2017-05-29 16:32 ` [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k) David Hildenbrand
  2017-06-01 11:39   ` Christian Borntraeger
@ 2017-06-01 12:59   ` David Hildenbrand
  2017-06-02  7:11     ` Christian Borntraeger
  1 sibling, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2017-06-01 12:59 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, Martin Schwidefsky, Heiko Carstens, Thomas Huth,
	Christian Borntraeger


> diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
> index d4d409b..b22c2b6 100644
> --- a/arch/s390/mm/pgtable.c
> +++ b/arch/s390/mm/pgtable.c
> @@ -196,7 +196,7 @@ static inline pgste_t ptep_xchg_start(struct mm_struct *mm,
>  {
>  	pgste_t pgste = __pgste(0);
>  
> -	if (mm_has_pgste(mm)) {
> +	if (pgtable_has_pgste(mm, __pa(ptep))) {
>  		pgste = pgste_get_lock(ptep);
>  		pgste = pgste_pte_notify(mm, addr, ptep, pgste);
>  	}
> @@ -207,7 +207,7 @@ static inline pte_t ptep_xchg_commit(struct mm_struct *mm,
>  				    unsigned long addr, pte_t *ptep,
>  				    pgste_t pgste, pte_t old, pte_t new)
>  {
> -	if (mm_has_pgste(mm)) {
> +	if (pgtable_has_pgste(mm, __pa(ptep))) {
>  		if (pte_val(old) & _PAGE_INVALID)
>  			pgste_set_key(ptep, pgste, new, mm);
>  		if (pte_val(new) & _PAGE_INVALID) 

I think these two checks are wrong. We really have to test only the
mapcount bit here (relying on mm_has_pgste(mm) is wrong in case the
global vm.allocate_pgste is set).

But before I continue working on this, I think it makes sense to clarify
whether something like this would be acceptable at all.

-- 

Thanks,

David


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-01 10:46 ` [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste Martin Schwidefsky
  2017-06-01 11:24   ` Christian Borntraeger
  2017-06-01 11:27   ` David Hildenbrand
@ 2017-06-02  7:02   ` Heiko Carstens
  2017-06-02  7:13     ` Christian Borntraeger
                       ` (2 more replies)
  2 siblings, 3 replies; 30+ messages in thread
From: Heiko Carstens @ 2017-06-02  7:02 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: David Hildenbrand, kvm, linux-kernel, Thomas Huth, Christian Borntraeger

On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:
> > Unfortunately, converting all page tables to 4k pgste page tables is
> > not possible without provoking various race conditions.
> 
> That is one approach we tried and was found to be buggy. The point is that
> you are not allowed to reallocate a page table while a VMA exists that is
> in the address range of that page table.
> 
> Another approach we tried is to use an ELF flag on the qemu executable.
> That does not work either because fs/exec.c allocates and populates the
> new mm struct for the argument pages before fs/binfmt_elf.c comes into
> play.

How about if you would fail the system call within arch_check_elf() if you
detect that the binary requires pgstes (as indicated by elf flags) and then
restart the system call?

That is: arch_check_elf() e.g. would set a thread flag that future mm's
should be allocated with pgstes. Then do_execve() would cleanup everything
and return to entry.S. Upon return to userspace we detect this condition
and simply restart the system call, similar to signals vs -ERESTARTSYS.

That would make do_execve() cleanup everything and upon reentering it would
allocate an mm with the pgste flag set.

Maybe this is a bit over-simplified, but might work.

At least I also don't like the next "hack" that is specifically designed
to only work with how QEMU is currently implemented. It might break with
future QEMU changes, or with the next user space implementation that
drives the kvm interface but does everything differently.

Let's look for a "clean" solution that will always work. We had too many
hacks for this problem and *all* of them were broken.


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-01 11:27   ` David Hildenbrand
@ 2017-06-02  7:06     ` Heiko Carstens
  0 siblings, 0 replies; 30+ messages in thread
From: Heiko Carstens @ 2017-06-02  7:06 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Martin Schwidefsky, kvm, linux-kernel, Thomas Huth,
	Christian Borntraeger

On Thu, Jun 01, 2017 at 01:27:28PM +0200, David Hildenbrand wrote:
> An alternative: Have some process that enables PGSTE for all of its
> future children. Fork+execv qemu. However, such a process in between
> will most likely confuse tooling like libvirt when it comes to process ids.

That would be something like sys_personality() vs setarch, e.g. adding a
new s390 specific personality flag and then do something like

setarch s390x --kvm qemu...

However I'm afraid that this is also not acceptable from a usability point
of view.
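The setarch-style variant could be sketched as a tiny wrapper binary.
Note that `ADDR_PGSTE_HYP` and the whole personality bit are
hypothetical; no such flag exists in the kernel today, this only
illustrates the shape such a tool would take:

```c
#include <sys/personality.h>
#include <unistd.h>

/* HYPOTHETICAL flag: "allocate pgste page tables for this process
 * tree"; the bit value is made up and unused by real kernels */
#define ADDR_PGSTE_HYP 0x10000000UL

/* compute the new persona: keep the current flags, add the pgste bit */
unsigned long make_kvm_personality(unsigned long current_pers)
{
	return current_pers | ADDR_PGSTE_HYP;
}

/* setarch-style wrapper: flag the process, then exec qemu in place,
 * so no intermediate pid is left behind to confuse libvirt */
int run_with_pgste(char **argv)
{
	unsigned long pers = (unsigned long)personality(0xffffffff); /* query only */

	if (personality(make_kvm_personality(pers)) == -1)
		return -1;
	execvp(argv[0], argv);
	return -1;	/* only reached if exec failed */
}
```

Because the wrapper execs in place, it would avoid the extra
intermediate process of the fork+execv variant, but as noted it still
pushes the burden onto every management tool that starts QEMU.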


* Re: [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k)
  2017-06-01 12:59   ` David Hildenbrand
@ 2017-06-02  7:11     ` Christian Borntraeger
  0 siblings, 0 replies; 30+ messages in thread
From: Christian Borntraeger @ 2017-06-02  7:11 UTC (permalink / raw)
  To: David Hildenbrand, kvm
  Cc: linux-kernel, Martin Schwidefsky, Heiko Carstens, Thomas Huth

On 06/01/2017 02:59 PM, David Hildenbrand wrote:
> 
>> diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
>> index d4d409b..b22c2b6 100644
>> --- a/arch/s390/mm/pgtable.c
>> +++ b/arch/s390/mm/pgtable.c
>> @@ -196,7 +196,7 @@ static inline pgste_t ptep_xchg_start(struct mm_struct *mm,
>>  {
>>  	pgste_t pgste = __pgste(0);
>>  
>> -	if (mm_has_pgste(mm)) {
>> +	if (pgtable_has_pgste(mm, __pa(ptep))) {
>>  		pgste = pgste_get_lock(ptep);
>>  		pgste = pgste_pte_notify(mm, addr, ptep, pgste);
>>  	}
>> @@ -207,7 +207,7 @@ static inline pte_t ptep_xchg_commit(struct mm_struct *mm,
>>  				    unsigned long addr, pte_t *ptep,
>>  				    pgste_t pgste, pte_t old, pte_t new)
>>  {
>> -	if (mm_has_pgste(mm)) {
>> +	if (pgtable_has_pgste(mm, __pa(ptep))) {
>>  		if (pte_val(old) & _PAGE_INVALID)
>>  			pgste_set_key(ptep, pgste, new, mm);
>>  		if (pte_val(new) & _PAGE_INVALID) 
> 
> I think these two checks are wrong. We really have to test here the
> mapcount bit only (relying on mm_has_pgste(mm) is wrong in case global
> vm.allocate_pgste ist set).
> 
> But before I continue working on this, I think it makes sense to clarify
> if something like that would be acceptable at all.

I think that is up to Martin to decide. Given the fact that Fedora, SUSE
and Ubuntu always enable this sysctl when the QEMU package is installed
(other distros as well?), I think we should really consider changing
things. I see 2 options:

1. dropping 2k page tables completely
pro:	- simplifies pagetable code (e.g. look at page_table_alloc code)
	- we could get rid of a lock in the pgtable allocation path (mm->context.pgtable_lock)
	- I am not aware of any performance impact due to the 4k page tables
	- transparent for old QEMUs
	- KVM will work out of the box
con: 	- higher page table memory usage for non-KVM processes

2. go with your approach
pro: 	- lower page table memory usage for non-KVM processes
	- KVM will work out of the box
	- no additional overhead for non-KVM processes
con:	- higher overhead for KVM processes during paging (since we are going
	  to use IPTE or friends anyway, the question is: does it matter?)
	- needs a QEMU change

Christian


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02  7:02   ` Heiko Carstens
@ 2017-06-02  7:13     ` Christian Borntraeger
  2017-06-02  7:16       ` Martin Schwidefsky
  2017-06-02  9:46     ` Martin Schwidefsky
  2017-06-02 10:54     ` David Hildenbrand
  2 siblings, 1 reply; 30+ messages in thread
From: Christian Borntraeger @ 2017-06-02  7:13 UTC (permalink / raw)
  To: Heiko Carstens, Martin Schwidefsky
  Cc: David Hildenbrand, kvm, linux-kernel, Thomas Huth

On 06/02/2017 09:02 AM, Heiko Carstens wrote:
> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:
>>> Unfortunately, converting all page tables to 4k pgste page tables is
>>> not possible without provoking various race conditions.
>>
>> That is one approach we tried and was found to be buggy. The point is that
>> you are not allowed to reallocate a page table while a VMA exists that is
>> in the address range of that page table.
>>
>> Another approach we tried is to use an ELF flag on the qemu executable.
>> That does not work either because fs/exec.c allocates and populates the
>> new mm struct for the argument pages before fs/binfmt_elf.c comes into
>> play.
> 
> How about if you would fail the system call within arch_check_elf() if you
> detect that the binary requires pgstes (as indicated by elf flags) and then
> restart the system call?
> 
> That is: arch_check_elf() e.g. would set a thread flag that future mm's
> should be allocated with pgstes. Then do_execve() would cleanup everything
> and return to entry.S. Upon return to userspace we detect this condition
> and simply restart the system call, similar to signals vs -ERESTARTSYS.
> 
> That would make do_execve() cleanup everything and upon reentering it would
> allocate an mm with the pgste flag set.
> 
> Maybe this is a bit over-simplified, but might work.
> 
> At least I also don't like the next "hack", that is specifically designed
> to only work with how QEMU is currently implemented. It might break with
> future QEMU changes or the next user space implementation that drives the
> kvm interface, but is doing everything differently.
> Let's look for a "clean" solution that will always work. We had too many
> hacks for this problem and *all* of them were broken.


The more I think about it: dropping the 2k page tables and always
allocating a full page would simplify pgalloc. As far as I can see, this
would also get rid of the mm->context.pgtable_lock.


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02  7:13     ` Christian Borntraeger
@ 2017-06-02  7:16       ` Martin Schwidefsky
  2017-06-02  7:18         ` Christian Borntraeger
  0 siblings, 1 reply; 30+ messages in thread
From: Martin Schwidefsky @ 2017-06-02  7:16 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Heiko Carstens, David Hildenbrand, kvm, linux-kernel, Thomas Huth

On Fri, 2 Jun 2017 09:13:03 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 06/02/2017 09:02 AM, Heiko Carstens wrote:
> > On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:  
> >>> Unfortunately, converting all page tables to 4k pgste page tables is
> >>> not possible without provoking various race conditions.  
> >>
> >> That is one approach we tried and was found to be buggy. The point is that
> >> you are not allowed to reallocate a page table while a VMA exists that is
> >> in the address range of that page table.
> >>
> >> Another approach we tried is to use an ELF flag on the qemu executable.
> >> That does not work either because fs/exec.c allocates and populates the
> >> new mm struct for the argument pages before fs/binfmt_elf.c comes into
> >> play.  
> > 
> > How about if you would fail the system call within arch_check_elf() if you
> > detect that the binary requires pgstes (as indicated by elf flags) and then
> > restart the system call?
> > 
> > That is: arch_check_elf() e.g. would set a thread flag that future mm's
> > should be allocated with pgstes. Then do_execve() would cleanup everything
> > and return to entry.S. Upon return to userspace we detect this condition
> > and simply restart the system call, similar to signals vs -ERESTARTSYS.
> > 
> > That would make do_execve() cleanup everything and upon reentering it would
> > allocate an mm with the pgste flag set.
> > 
> > Maybe this is a bit over-simplified, but might work.
> > 
> > At least I also don't like the next "hack", that is specifically designed
> > to only work with how QEMU is currently implemented. It might break with
> > future QEMU changes or the next user space implementation that drives the
> > kvm interface, but is doing everything differently.
> > Let's look for a "clean" solution that will always work. We had too many
> > hacks for this problem and *all* of them were broken.  
> 
> 
> The more I think about it, dropping 2k page tables and always allocate a full
> page would simplify pgalloc. As far I can see this would also get rid of
> the &mm->context.pgtable_lock.
 
And it would waste twice the amount of memory for page tables. NAK.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02  7:16       ` Martin Schwidefsky
@ 2017-06-02  7:18         ` Christian Borntraeger
  2017-06-02  7:25           ` Christian Borntraeger
  0 siblings, 1 reply; 30+ messages in thread
From: Christian Borntraeger @ 2017-06-02  7:18 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Heiko Carstens, David Hildenbrand, kvm, linux-kernel, Thomas Huth

On 06/02/2017 09:16 AM, Martin Schwidefsky wrote:
> On Fri, 2 Jun 2017 09:13:03 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
>> On 06/02/2017 09:02 AM, Heiko Carstens wrote:
>>> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:  
>>>>> Unfortunately, converting all page tables to 4k pgste page tables is
>>>>> not possible without provoking various race conditions.  
>>>>
>>>> That is one approach we tried and was found to be buggy. The point is that
>>>> you are not allowed to reallocate a page table while a VMA exists that is
>>>> in the address range of that page table.
>>>>
>>>> Another approach we tried is to use an ELF flag on the qemu executable.
>>>> That does not work either because fs/exec.c allocates and populates the
>>>> new mm struct for the argument pages before fs/binfmt_elf.c comes into
>>>> play.  
>>>
>>> How about if you would fail the system call within arch_check_elf() if you
>>> detect that the binary requires pgstes (as indicated by elf flags) and then
>>> restart the system call?
>>>
>>> That is: arch_check_elf() e.g. would set a thread flag that future mm's
>>> should be allocated with pgstes. Then do_execve() would cleanup everything
>>> and return to entry.S. Upon return to userspace we detect this condition
>>> and simply restart the system call, similar to signals vs -ERESTARTSYS.
>>>
>>> That would make do_execve() cleanup everything and upon reentering it would
>>> allocate an mm with the pgste flag set.
>>>
>>> Maybe this is a bit over-simplified, but might work.
>>>
>>> At least I also don't like the next "hack", that is specifically designed
>>> to only work with how QEMU is currently implemented. It might break with
>>> future QEMU changes or the next user space implementation that drives the
>>> kvm interface, but is doing everything differently.
>>> Let's look for a "clean" solution that will always work. We had too many
>>> hacks for this problem and *all* of them were broken.  
>>
>>
>> The more I think about it, dropping 2k page tables and always allocate a full
>> page would simplify pgalloc. As far I can see this would also get rid of
>> the &mm->context.pgtable_lock.
> 
> And it would waste twice the amount of memory for page tables. NAK.

Yes, and we spend the same amount of memory TODAY, because every distro
on the planet that uses KVM has vm.allocate_pgste set.


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02  7:18         ` Christian Borntraeger
@ 2017-06-02  7:25           ` Christian Borntraeger
  2017-06-02  8:11             ` Martin Schwidefsky
  0 siblings, 1 reply; 30+ messages in thread
From: Christian Borntraeger @ 2017-06-02  7:25 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Heiko Carstens, David Hildenbrand, kvm, linux-kernel, Thomas Huth

On 06/02/2017 09:18 AM, Christian Borntraeger wrote:
> On 06/02/2017 09:16 AM, Martin Schwidefsky wrote:
>> On Fri, 2 Jun 2017 09:13:03 +0200
>> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
>>
>>> On 06/02/2017 09:02 AM, Heiko Carstens wrote:
>>>> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:  
>>>>>> Unfortunately, converting all page tables to 4k pgste page tables is
>>>>>> not possible without provoking various race conditions.  
>>>>>
>>>>> That is one approach we tried and was found to be buggy. The point is that
>>>>> you are not allowed to reallocate a page table while a VMA exists that is
>>>>> in the address range of that page table.
>>>>>
>>>>> Another approach we tried is to use an ELF flag on the qemu executable.
>>>>> That does not work either because fs/exec.c allocates and populates the
>>>>> new mm struct for the argument pages before fs/binfmt_elf.c comes into
>>>>> play.  
>>>>
>>>> How about if you would fail the system call within arch_check_elf() if you
>>>> detect that the binary requires pgstes (as indicated by elf flags) and then
>>>> restart the system call?
>>>>
>>>> That is: arch_check_elf() e.g. would set a thread flag that future mm's
>>>> should be allocated with pgstes. Then do_execve() would cleanup everything
>>>> and return to entry.S. Upon return to userspace we detect this condition
>>>> and simply restart the system call, similar to signals vs -ERESTARTSYS.
>>>>
>>>> That would make do_execve() cleanup everything and upon reentering it would
>>>> allocate an mm with the pgste flag set.
>>>>
>>>> Maybe this is a bit over-simplified, but might work.
>>>>
>>>> At least I also don't like the next "hack", that is specifically designed
>>>> to only work with how QEMU is currently implemented. It might break with
>>>> future QEMU changes or the next user space implementation that drives the
>>>> kvm interface, but is doing everything differently.
>>>> Let's look for a "clean" solution that will always work. We had too many
>>>> hacks for this problem and *all* of them were broken.  
>>>
>>>
>>> The more I think about it, dropping 2k page tables and always allocate a full
>>> page would simplify pgalloc. As far I can see this would also get rid of
>>> the &mm->context.pgtable_lock.
>>
>> And it would waste twice the amount of memory for page tables. NAK.
> 
> Yes and we spend the same amount of memory TODAY, because every distro on the
> planet that uses KVM has sysctl.allocate_pgste set.

Maybe today's approach is still the best (if QEMU is installed, it's all
4k; if not, it's all 2k).


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02  7:25           ` Christian Borntraeger
@ 2017-06-02  8:11             ` Martin Schwidefsky
  0 siblings, 0 replies; 30+ messages in thread
From: Martin Schwidefsky @ 2017-06-02  8:11 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Heiko Carstens, David Hildenbrand, kvm, linux-kernel, Thomas Huth

On Fri, 2 Jun 2017 09:25:54 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 06/02/2017 09:18 AM, Christian Borntraeger wrote:
> > On 06/02/2017 09:16 AM, Martin Schwidefsky wrote:  
> >> On Fri, 2 Jun 2017 09:13:03 +0200
> >> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> >>  
> >>> On 06/02/2017 09:02 AM, Heiko Carstens wrote:  
> >>>> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:    
> >>>>>> Unfortunately, converting all page tables to 4k pgste page tables is
> >>>>>> not possible without provoking various race conditions.    
> >>>>>
> >>>>> That is one approach we tried and was found to be buggy. The point is that
> >>>>> you are not allowed to reallocate a page table while a VMA exists that is
> >>>>> in the address range of that page table.
> >>>>>
> >>>>> Another approach we tried is to use an ELF flag on the qemu executable.
> >>>>> That does not work either because fs/exec.c allocates and populates the
> >>>>> new mm struct for the argument pages before fs/binfmt_elf.c comes into
> >>>>> play.    
> >>>>
> >>>> How about if you would fail the system call within arch_check_elf() if you
> >>>> detect that the binary requires pgstes (as indicated by elf flags) and then
> >>>> restart the system call?
> >>>>
> >>>> That is: arch_check_elf() e.g. would set a thread flag that future mm's
> >>>> should be allocated with pgstes. Then do_execve() would cleanup everything
> >>>> and return to entry.S. Upon return to userspace we detect this condition
> >>>> and simply restart the system call, similar to signals vs -ERESTARTSYS.
> >>>>
> >>>> That would make do_execve() cleanup everything and upon reentering it would
> >>>> allocate an mm with the pgste flag set.
> >>>>
> >>>> Maybe this is a bit over-simplified, but might work.
> >>>>
> >>>> At least I also don't like the next "hack", that is specifically designed
> >>>> to only work with how QEMU is currently implemented. It might break with
> >>>> future QEMU changes or the next user space implementation that drives the
> >>>> kvm interface, but is doing everything differently.
> >>>> Let's look for a "clean" solution that will always work. We had too many
> >>>> hacks for this problem and *all* of them were broken.    
> >>>
> >>>
> >>> The more I think about it, dropping 2k page tables and always allocate a full
> >>> page would simplify pgalloc. As far I can see this would also get rid of
> >>> the &mm->context.pgtable_lock.  
> >>
> >> And it would waste twice the amount of memory for page tables. NAK.  
> > 
> > Yes and we spend the same amount of memory TODAY, because every distro on the
> > planet that uses KVM has sysctl.allocate_pgste set.  
> 
> Maybe todays approach might be still the best. (if qemu is installed, its all
> 4k, if not its all 2k)

Exactly.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02  7:02   ` Heiko Carstens
  2017-06-02  7:13     ` Christian Borntraeger
@ 2017-06-02  9:46     ` Martin Schwidefsky
  2017-06-02 10:19       ` Christian Borntraeger
  2017-06-02 10:28       ` Heiko Carstens
  2017-06-02 10:54     ` David Hildenbrand
  2 siblings, 2 replies; 30+ messages in thread
From: Martin Schwidefsky @ 2017-06-02  9:46 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: David Hildenbrand, kvm, linux-kernel, Thomas Huth, Christian Borntraeger

On Fri, 2 Jun 2017 09:02:10 +0200
Heiko Carstens <heiko.carstens@de.ibm.com> wrote:

> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:
> > > Unfortunately, converting all page tables to 4k pgste page tables is
> > > not possible without provoking various race conditions.  
> > 
> > That is one approach we tried and was found to be buggy. The point is that
> > you are not allowed to reallocate a page table while a VMA exists that is
> > in the address range of that page table.
> > 
> > Another approach we tried is to use an ELF flag on the qemu executable.
> > That does not work either because fs/exec.c allocates and populates the
> > new mm struct for the argument pages before fs/binfmt_elf.c comes into
> > play.  
> 
> How about if you would fail the system call within arch_check_elf() if you
> detect that the binary requires pgstes (as indicated by elf flags) and then
> restart the system call?
> 
> That is: arch_check_elf() e.g. would set a thread flag that future mm's
> should be allocated with pgstes. Then do_execve() would cleanup everything
> and return to entry.S. Upon return to userspace we detect this condition
> and simply restart the system call, similar to signals vs -ERESTARTSYS.
> 
> That would make do_execve() cleanup everything and upon reentering it would
> allocate an mm with the pgste flag set.
> 
> Maybe this is a bit over-simplified, but might work.

This is not over-simplified at all, that does work:
--
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 69a77eecaec1..7bd182676ddd 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -64,6 +64,7 @@ config ARCH_SUPPORTS_UPROBES
 
 config S390
 	def_bool y
+	select ARCH_BINFMT_ELF_STATE
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/s390/include/asm/elf.h b/arch/s390/include/asm/elf.h
index e8f623041769..79911231f9e6 100644
--- a/arch/s390/include/asm/elf.h
+++ b/arch/s390/include/asm/elf.h
@@ -151,6 +151,28 @@ extern unsigned int vdso_enabled;
 	 && (x)->e_ident[EI_CLASS] == ELF_CLASS)
 #define compat_start_thread	start_thread31
 
+struct arch_elf_state {
+};
+
+#define INIT_ARCH_ELF_STATE { }
+
+#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state) (0)
+#define arch_check_elf(ehdr, interp, interp_ehdr, state)	\
+({								\
+	struct elf64_hdr *hdr = (void*) ehdr;			\
+	int _rc = 0;						\
+	if (hdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
+	    (hdr->e_flags & 0x00000002) &&			\
+	    !page_table_allocate_pgste &&			\
+	    !current->mm->context.alloc_pgste) {		\
+		current->mm->context.alloc_pgste = 1;		\
+		set_pt_regs_flag(task_pt_regs(current),		\
+				 PIF_SYSCALL_RESTART);		\
+		_rc = -EAGAIN;					\
+	}							\
+	_rc;							\
+})
+
 /* For SVR4/S390 the function pointer to be registered with `atexit` is
    passed in R14. */
 #define ELF_PLAT_INIT(_r, load_addr) \
diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
index c119d564d8f2..268a5d22ce1b 100644
--- a/arch/s390/include/asm/mmu_context.h
+++ b/arch/s390/include/asm/mmu_context.h
@@ -25,7 +25,8 @@ static inline int init_new_context(struct task_struct *tsk,
 	mm->context.gmap_asce = 0;
 	mm->context.flush_mm = 0;
 #ifdef CONFIG_PGSTE
-	mm->context.alloc_pgste = page_table_allocate_pgste;
+	mm->context.alloc_pgste = page_table_allocate_pgste ||
+		current->mm->context.alloc_pgste;
 	mm->context.has_pgste = 0;
 	mm->context.use_skey = 0;
 	mm->context.use_cmma = 0;
diff --git a/arch/s390/include/asm/ptrace.h b/arch/s390/include/asm/ptrace.h
index 99bc456cc26a..24baa80f7af6 100644
--- a/arch/s390/include/asm/ptrace.h
+++ b/arch/s390/include/asm/ptrace.h
@@ -11,9 +11,11 @@
 
 #define PIF_SYSCALL		0	/* inside a system call */
 #define PIF_PER_TRAP		1	/* deliver sigtrap on return to user */
+#define PIF_SYSCALL_RESTART	2	/* restart the current system call */
 
 #define _PIF_SYSCALL		_BITUL(PIF_SYSCALL)
 #define _PIF_PER_TRAP		_BITUL(PIF_PER_TRAP)
+#define _PIF_SYSCALL_RESTART	_BITUL(PIF_SYSCALL_RESTART)
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
index 0c2c3b8bfc9a..8c824b32527a 100644
--- a/arch/s390/kernel/entry.S
+++ b/arch/s390/kernel/entry.S
@@ -52,7 +52,7 @@ _TIF_TRACE	= (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SECCOMP | \
 		   _TIF_SYSCALL_TRACEPOINT)
 _CIF_WORK	= (_CIF_MCCK_PENDING | _CIF_ASCE_PRIMARY | \
 		   _CIF_ASCE_SECONDARY | _CIF_FPU)
-_PIF_WORK	= (_PIF_PER_TRAP)
+_PIF_WORK	= (_PIF_PER_TRAP | _PIF_SYSCALL_RESTART)
 
 #define BASED(name) name-cleanup_critical(%r13)
 
@@ -342,6 +342,8 @@ ENTRY(system_call)
 	jo	.Lsysc_guarded_storage
 	TSTMSK	__PT_FLAGS(%r11),_PIF_PER_TRAP
 	jo	.Lsysc_singlestep
+	TSTMSK	__PT_FLAGS(%r11),_PIF_SYSCALL_RESTART
+	jo	.Lsysc_syscall_restart
 	TSTMSK	__TI_flags(%r12),_TIF_SIGPENDING
 	jo	.Lsysc_sigpending
 	TSTMSK	__TI_flags(%r12),_TIF_NOTIFY_RESUME
@@ -434,6 +436,15 @@ ENTRY(system_call)
 	jg	do_per_trap
 
 #
+# _PIF_SYSCALL_RESTART is set, repeat the current system call
+#
+.Lsysc_syscall_restart:
+	ni	__PT_FLAGS+7(%r11),255-_PIF_SYSCALL_RESTART
+	lmg	%r1,%r7,__PT_R1(%r11)	# load svc arguments
+	lg	%r2,__PT_ORIG_GPR2(%r11)
+	j	.Lsysc_do_svc
+
+#
 # call tracehook_report_syscall_entry/tracehook_report_syscall_exit before
 # and after the system call
 #
-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02  9:46     ` Martin Schwidefsky
@ 2017-06-02 10:19       ` Christian Borntraeger
  2017-06-02 10:53         ` Martin Schwidefsky
  2017-06-02 10:28       ` Heiko Carstens
  1 sibling, 1 reply; 30+ messages in thread
From: Christian Borntraeger @ 2017-06-02 10:19 UTC (permalink / raw)
  To: Martin Schwidefsky, Heiko Carstens
  Cc: David Hildenbrand, kvm, linux-kernel, Thomas Huth

On 06/02/2017 11:46 AM, Martin Schwidefsky wrote:
> On Fri, 2 Jun 2017 09:02:10 +0200
> Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
> 
>> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:
>>>> Unfortunately, converting all page tables to 4k pgste page tables is
>>>> not possible without provoking various race conditions.  
>>>
>>> That is one approach we tried and was found to be buggy. The point is that
>>> you are not allowed to reallocate a page table while a VMA exists that is
>>> in the address range of that page table.
>>>
>>> Another approach we tried is to use an ELF flag on the qemu executable.
>>> That does not work either because fs/exec.c allocates and populates the
>>> new mm struct for the argument pages before fs/binfmt_elf.c comes into
>>> play.  
>>
>> How about if you would fail the system call within arch_check_elf() if you
>> detect that the binary requires pgstes (as indicated by elf flags) and then
>> restart the system call?
>>
>> That is: arch_check_elf() e.g. would set a thread flag that future mm's
>> should be allocated with pgstes. Then do_execve() would cleanup everything
>> and return to entry.S. Upon return to userspace we detect this condition
>> and simply restart the system call, similar to signals vs -ERESTARTSYS.
>>
>> That would make do_execve() cleanup everything and upon reentering it would
>> allocate an mm with the pgste flag set.
>>
>> Maybe this is a bit over-simplified, but might work.
> 
> This is not over-simplified at all, that does work:


Nice. Next question is how to integrate that elf flag into the qemu
build environment.

> --
> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> index 69a77eecaec1..7bd182676ddd 100644
> --- a/arch/s390/Kconfig
> +++ b/arch/s390/Kconfig
> @@ -64,6 +64,7 @@ config ARCH_SUPPORTS_UPROBES
> 
>  config S390
>  	def_bool y
> +	select ARCH_BINFMT_ELF_STATE
>  	select ARCH_HAS_DEVMEM_IS_ALLOWED
>  	select ARCH_HAS_ELF_RANDOMIZE
>  	select ARCH_HAS_GCOV_PROFILE_ALL
> diff --git a/arch/s390/include/asm/elf.h b/arch/s390/include/asm/elf.h
> index e8f623041769..79911231f9e6 100644
> --- a/arch/s390/include/asm/elf.h
> +++ b/arch/s390/include/asm/elf.h
> @@ -151,6 +151,28 @@ extern unsigned int vdso_enabled;
>  	 && (x)->e_ident[EI_CLASS] == ELF_CLASS)
>  #define compat_start_thread	start_thread31
> 
> +struct arch_elf_state {
> +};
> +
> +#define INIT_ARCH_ELF_STATE { }
> +
> +#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state) (0)
> +#define arch_check_elf(ehdr, interp, interp_ehdr, state)	\
> +({								\
> +	struct elf64_hdr *hdr = (void*) ehdr;			\
> +	int _rc = 0;						\
> +	if (hdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
> +	    (hdr->e_flags & 0x00000002) &&			\

shall we define some EF_S390_PGSTE flags in elf.h for this number?
What is 0x00000001 by the way?


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02  9:46     ` Martin Schwidefsky
  2017-06-02 10:19       ` Christian Borntraeger
@ 2017-06-02 10:28       ` Heiko Carstens
  2017-06-02 10:48         ` Martin Schwidefsky
  1 sibling, 1 reply; 30+ messages in thread
From: Heiko Carstens @ 2017-06-02 10:28 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: David Hildenbrand, kvm, linux-kernel, Thomas Huth, Christian Borntraeger

On Fri, Jun 02, 2017 at 11:46:47AM +0200, Martin Schwidefsky wrote:
> On Fri, 2 Jun 2017 09:02:10 +0200
> Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
> > Maybe this is a bit over-simplified, but might work.
> This is not over-simplified at all, that does work:

Good!

> +struct arch_elf_state {
> +};
> +
> +#define INIT_ARCH_ELF_STATE { }
> +
> +#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state) (0)
> +#define arch_check_elf(ehdr, interp, interp_ehdr, state)	\
> +({								\
> +	struct elf64_hdr *hdr = (void*) ehdr;			\
> +	int _rc = 0;						\
> +	if (hdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
> +	    (hdr->e_flags & 0x00000002) &&			\
> +	    !page_table_allocate_pgste &&			\
> +	    !current->mm->context.alloc_pgste) {		\
> +		current->mm->context.alloc_pgste = 1;		\

However, I think this is over-simplified, unless I'm mistaken.

If you set current->mm->context.alloc_pgste here, then 4k page tables will
be freed when the original mm is released, instead of the correct 2k ones.

I think you need an additional intermediate context flag here. Something
like current->mm->context.request_pgste or whatever, no?


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02 10:28       ` Heiko Carstens
@ 2017-06-02 10:48         ` Martin Schwidefsky
  0 siblings, 0 replies; 30+ messages in thread
From: Martin Schwidefsky @ 2017-06-02 10:48 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: David Hildenbrand, kvm, linux-kernel, Thomas Huth, Christian Borntraeger

On Fri, 2 Jun 2017 12:28:48 +0200
Heiko Carstens <heiko.carstens@de.ibm.com> wrote:

> On Fri, Jun 02, 2017 at 11:46:47AM +0200, Martin Schwidefsky wrote:
> > On Fri, 2 Jun 2017 09:02:10 +0200
> > Heiko Carstens <heiko.carstens@de.ibm.com> wrote:  
> > > Maybe this is a bit over-simplified, but might work.  
> > This is not over-simplified at all, that does work:  
> 
> Good!
> 
> > +struct arch_elf_state {
> > +};
> > +
> > +#define INIT_ARCH_ELF_STATE { }
> > +
> > +#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state) (0)
> > +#define arch_check_elf(ehdr, interp, interp_ehdr, state)	\
> > +({								\
> > +	struct elf64_hdr *hdr = (void*) ehdr;			\
> > +	int _rc = 0;						\
> > +	if (hdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
> > +	    (hdr->e_flags & 0x00000002) &&			\
> > +	    !page_table_allocate_pgste &&			\
> > +	    !current->mm->context.alloc_pgste) {		\
> > +		current->mm->context.alloc_pgste = 1;		\  
> 
> However, I think this is over-simplified, unless I'm mistaken.
> 
> If you set current->mm->context.alloc_pgste here, then that means that 4k
> page tables will be freed when the original mm will be released, instead of
> the correct 2k ones.
> 
> I think you need an additional intermediate context flag here. Something
> like current->mm->context.request_pgste or whatever, no?

Yes, the flags for the current mm and the next mm have to be different.
request_pgste is a nice name for the new flag, I'll use it.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02 10:19       ` Christian Borntraeger
@ 2017-06-02 10:53         ` Martin Schwidefsky
  2017-06-02 13:20           ` Christian Borntraeger
  0 siblings, 1 reply; 30+ messages in thread
From: Martin Schwidefsky @ 2017-06-02 10:53 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Heiko Carstens, David Hildenbrand, kvm, linux-kernel, Thomas Huth

On Fri, 2 Jun 2017 12:19:19 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 06/02/2017 11:46 AM, Martin Schwidefsky wrote:
> > On Fri, 2 Jun 2017 09:02:10 +0200
> > Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
> >   
> >> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:  
> >>>> Unfortunately, converting all page tables to 4k pgste page tables is
> >>>> not possible without provoking various race conditions.    
> >>>
> >>> That is one approach we tried and was found to be buggy. The point is that
> >>> you are not allowed to reallocate a page table while a VMA exists that is
> >>> in the address range of that page table.
> >>>
> >>> Another approach we tried is to use an ELF flag on the qemu executable.
> >>> That does not work either because fs/exec.c allocates and populates the
> >>> new mm struct for the argument pages before fs/binfmt_elf.c comes into
> >>> play.    
> >>
> >> How about if you would fail the system call within arch_check_elf() if you
> >> detect that the binary requires pgstes (as indicated by elf flags) and then
> >> restart the system call?
> >>
> >> That is: arch_check_elf() e.g. would set a thread flag that future mm's
> >> should be allocated with pgstes. Then do_execve() would cleanup everything
> >> and return to entry.S. Upon return to userspace we detect this condition
> >> and simply restart the system call, similar to signals vs -ERESTARTSYS.
> >>
> >> That would make do_execve() cleanup everything and upon reentering it would
> >> allocate an mm with the pgste flag set.
> >>
> >> Maybe this is a bit over-simplified, but might work.  
> > 
> > This is not over-simplified at all, that does work:  
> 
> 
> Nice. Next question is how to integrate that elf flag into the qemu
> build environment.

So far I use a small C program to set the flag:

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ERREXIT(...)                    \
do {                                    \
        fprintf(stderr, __VA_ARGS__);   \
        exit(-1);                       \
} while (0)

int main(int argc, char *argv[])
{
        Elf64_Ehdr ehdr;
        int fd;

        if (argc != 2)
                ERREXIT("Usage: %s <elf-file>\n", argv[0]);

        fd = open(argv[1], O_RDWR);
        if (fd < 0)
                ERREXIT("Unable to open file %s\n", argv[1]);

        if (pread(fd, &ehdr, sizeof(ehdr), 0) != sizeof(ehdr) ||
            memcmp(&ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
            ehdr.e_ident[EI_CLASS] != ELFCLASS64 ||
            ehdr.e_machine != EM_S390)
                ERREXIT("Invalid ELF file %s\n", argv[1]);

        ehdr.e_flags |= 0x00000002;

        if (pwrite(fd, &ehdr, sizeof(ehdr), 0) != sizeof(ehdr))
                ERREXIT("Write to file %s failed\n", argv[1]);

        close(fd);
        return 0;
}
 
> > --
> > diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> > index 69a77eecaec1..7bd182676ddd 100644
> > --- a/arch/s390/Kconfig
> > +++ b/arch/s390/Kconfig
> > @@ -64,6 +64,7 @@ config ARCH_SUPPORTS_UPROBES
> > 
> >  config S390
> >  	def_bool y
> > +	select ARCH_BINFMT_ELF_STATE
> >  	select ARCH_HAS_DEVMEM_IS_ALLOWED
> >  	select ARCH_HAS_ELF_RANDOMIZE
> >  	select ARCH_HAS_GCOV_PROFILE_ALL
> > diff --git a/arch/s390/include/asm/elf.h b/arch/s390/include/asm/elf.h
> > index e8f623041769..79911231f9e6 100644
> > --- a/arch/s390/include/asm/elf.h
> > +++ b/arch/s390/include/asm/elf.h
> > @@ -151,6 +151,28 @@ extern unsigned int vdso_enabled;
> >  	 && (x)->e_ident[EI_CLASS] == ELF_CLASS)
> >  #define compat_start_thread	start_thread31
> > 
> > +struct arch_elf_state {
> > +};
> > +
> > +#define INIT_ARCH_ELF_STATE { }
> > +
> > +#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state) (0)
> > +#define arch_check_elf(ehdr, interp, interp_ehdr, state)	\
> > +({								\
> > +	struct elf64_hdr *hdr = (void*) ehdr;			\
> > +	int _rc = 0;						\
> > +	if (hdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
> > +	    (hdr->e_flags & 0x00000002) &&			\  
> 
> shall we define some EF_S390_PGSTE flags in elf.h for this number?
> What is 0x00000001 by the way?

Yes, we have to give the bit a nice name, like EF_S390_REQUEST_PGSTE
0x00000001 is EF_S390_HIGH_GPRS

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02  7:02   ` Heiko Carstens
  2017-06-02  7:13     ` Christian Borntraeger
  2017-06-02  9:46     ` Martin Schwidefsky
@ 2017-06-02 10:54     ` David Hildenbrand
  2 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2017-06-02 10:54 UTC (permalink / raw)
  To: Heiko Carstens, Martin Schwidefsky
  Cc: kvm, linux-kernel, Thomas Huth, Christian Borntraeger

On 02.06.2017 09:02, Heiko Carstens wrote:
> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:
>>> Unfortunately, converting all page tables to 4k pgste page tables is
>>> not possible without provoking various race conditions.
>>
>> That is one approach we tried and was found to be buggy. The point is that
>> you are not allowed to reallocate a page table while a VMA exists that is
>> in the address range of that page table.
>>
>> Another approach we tried is to use an ELF flag on the qemu executable.
>> That does not work either because fs/exec.c allocates and populates the
>> new mm struct for the argument pages before fs/binfmt_elf.c comes into
>> play.
> 
> How about if you would fail the system call within arch_check_elf() if you
> detect that the binary requires pgstes (as indicated by elf flags) and then
> restart the system call?
> 
> That is: arch_check_elf() e.g. would set a thread flag that future mm's
> should be allocated with pgstes. Then do_execve() would cleanup everything
> and return to entry.S. Upon return to userspace we detect this condition
> and simply restart the system call, similar to signals vs -ERESTARTSYS.
> 
> That would make do_execve() cleanup everything and upon reentering it would
> allocate an mm with the pgste flag set.

Cool, I thought that we would not be able to get around having to make a
decision at fork time, but this really looks promising.

Thanks for this idea Heiko!


-- 

Thanks,

David


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02 10:53         ` Martin Schwidefsky
@ 2017-06-02 13:20           ` Christian Borntraeger
  2017-06-07 12:34             ` Martin Schwidefsky
  0 siblings, 1 reply; 30+ messages in thread
From: Christian Borntraeger @ 2017-06-02 13:20 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Heiko Carstens, David Hildenbrand, kvm, linux-kernel, Thomas Huth

On 06/02/2017 12:53 PM, Martin Schwidefsky wrote:
> On Fri, 2 Jun 2017 12:19:19 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
>> On 06/02/2017 11:46 AM, Martin Schwidefsky wrote:
>>> On Fri, 2 Jun 2017 09:02:10 +0200
>>> Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
>>>   
>>>> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:  
>>>>>> Unfortunately, converting all page tables to 4k pgste page tables is
>>>>>> not possible without provoking various race conditions.    
>>>>>
>>>>> That is one approach we tried and was found to be buggy. The point is that
>>>>> you are not allowed to reallocate a page table while a VMA exists that is
>>>>> in the address range of that page table.
>>>>>
>>>>> Another approach we tried is to use an ELF flag on the qemu executable.
>>>>> That does not work either because fs/exec.c allocates and populates the
>>>>> new mm struct for the argument pages before fs/binfmt_elf.c comes into
>>>>> play.    
>>>>
>>>> How about if you would fail the system call within arch_check_elf() if you
>>>> detect that the binary requires pgstes (as indicated by elf flags) and then
>>>> restart the system call?
>>>>
>>>> That is: arch_check_elf() e.g. would set a thread flag that future mm's
>>>> should be allocated with pgstes. Then do_execve() would cleanup everything
>>>> and return to entry.S. Upon return to userspace we detect this condition
>>>> and simply restart the system call, similar to signals vs -ERESTARTSYS.
>>>>
>>>> That would make do_execve() cleanup everything and upon reentering it would
>>>> allocate an mm with the pgste flag set.
>>>>
>>>> Maybe this is a bit over-simplified, but might work.  
>>>
>>> This is not over-simplified at all, that does work:  
>>
>>
>> Nice. Next question is how to integrate that elf flag into the qemu
>> build environment.
> 
> So far I use a small C program to set the flag:
> 
> #include <elf.h>
> #include <fcntl.h>
> #include <stdio.h>
> #include <stdlib.h>
> 
> #define ERREXIT(...)                    \
> do {                                    \
>         fprintf(stderr, __VA_ARGS__);   \
>         exit(-1);                       \
> } while (0)
> 
> int main(int argc, char *argv[])
> {
>         Elf64_Ehdr ehdr;
>         int fd, rc;
> 
>         if (argc != 2)
>                 ERREXIT("Usage: %s <elf-file>\n", argv[0]);
> 
>         fd = open(argv[1], O_RDWR);
>         if (fd < 0)
>                 ERREXIT("Unable to open file %s\n", argv[1]);
> 
>         if (pread(fd, &ehdr, sizeof(ehdr), 0) != sizeof(ehdr) ||
>             memcmp(&ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
>                 ehdr.e_ident[EI_CLASS] != ELFCLASS64 ||
>             ehdr.e_machine != EM_S390)
>                 ERREXIT("Invalid ELF file %s\n", argv[1]);
> 
>         ehdr.e_flags |= 0x00000002;
> 
>         if (pwrite(fd, &ehdr, sizeof(ehdr), 0) != sizeof(ehdr))
>                 ERREXIT("Write to file %s failed\n", argv[1]);
> 
>         close(fd);
>         return 0;
> }
> 

Thanks for that. I assume this is mostly for testing and we want proper
toolchain support for this. Otherwise things like build-id (the sha variant)
might be wrong, no?


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-02 13:20           ` Christian Borntraeger
@ 2017-06-07 12:34             ` Martin Schwidefsky
  2017-06-07 20:47               ` Heiko Carstens
  0 siblings, 1 reply; 30+ messages in thread
From: Martin Schwidefsky @ 2017-06-07 12:34 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Heiko Carstens, David Hildenbrand, kvm, linux-kernel,
	Thomas Huth, Andreas Krebbel

On Fri, 2 Jun 2017 15:20:33 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 06/02/2017 12:53 PM, Martin Schwidefsky wrote:
> > On Fri, 2 Jun 2017 12:19:19 +0200
> > Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> >   
> >> On 06/02/2017 11:46 AM, Martin Schwidefsky wrote:  
> >>> On Fri, 2 Jun 2017 09:02:10 +0200
> >>> Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
> >>>     
> >>>> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:    
> >>>>>> Unfortunately, converting all page tables to 4k pgste page tables is
> >>>>>> not possible without provoking various race conditions.      
> >>>>>
> >>>>> That is one approach we tried and was found to be buggy. The point is that
> >>>>> you are not allowed to reallocate a page table while a VMA exists that is
> >>>>> in the address range of that page table.
> >>>>>
> >>>>> Another approach we tried is to use an ELF flag on the qemu executable.
> >>>>> That does not work either because fs/exec.c allocates and populates the
> >>>>> new mm struct for the argument pages before fs/binfmt_elf.c comes into
> >>>>> play.      
> >>>>
> >>>> How about if you would fail the system call within arch_check_elf() if you
> >>>> detect that the binary requires pgstes (as indicated by elf flags) and then
> >>>> restart the system call?
> >>>>
> >>>> That is: arch_check_elf() e.g. would set a thread flag that future mm's
> >>>> should be allocated with pgstes. Then do_execve() would cleanup everything
> >>>> and return to entry.S. Upon return to userspace we detect this condition
> >>>> and simply restart the system call, similar to signals vs -ERESTARTSYS.
> >>>>
> >>>> That would make do_execve() cleanup everything and upon reentering it would
> >>>> allocate an mm with the pgste flag set.
> >>>>
> >>>> Maybe this is a bit over-simplified, but might work.    
> >>>
> >>> This is not over-simplified at all, that does work:    
> >>
> >>
> >> Nice. Next question is how to integrate that elf flag into the qemu
> >> build environment.  
> > 
> > So far I use a small C program to set the flag:
> > 
> > #include <elf.h>
> > #include <fcntl.h>
> > #include <stdio.h>
> > #include <stdlib.h>
> > 
> > #define ERREXIT(...)                    \
> > do {                                    \
> >         fprintf(stderr, __VA_ARGS__);   \
> >         exit(-1);                       \
> > } while (0)
> > 
> > int main(int argc, char *argv[])
> > {
> >         Elf64_Ehdr ehdr;
> >         int fd, rc;
> > 
> >         if (argc != 2)
> >                 ERREXIT("Usage: %s <elf-file>\n", argv[0]);
> > 
> >         fd = open(argv[1], O_RDWR);
> >         if (fd < 0)
> >                 ERREXIT("Unable to open file %s\n", argv[1]);
> > 
> >         if (pread(fd, &ehdr, sizeof(ehdr), 0) != sizeof(ehdr) ||
> >             memcmp(&ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
> >                 ehdr.e_ident[EI_CLASS] != ELFCLASS64 ||
> >             ehdr.e_machine != EM_S390)
> >                 ERREXIT("Invalid ELF file %s\n", argv[1]);
> > 
> >         ehdr.e_flags |= 0x00000002;
> > 
> >         if (pwrite(fd, &ehdr, sizeof(ehdr), 0) != sizeof(ehdr))
> >                 ERREXIT("Write to file %s failed\n", argv[1]);
> > 
> >         close(fd);
> >         return 0;
> > }
> >   
> 
> Thanks for that. I assume this is mostly for testing and we want to have 
> toolchain support for that. Otherwise things like  build-id (the sha variant) 
> might be wrong, no?

Andreas Krebbel suggested using an ELF segment type to mark the qemu binary,
similar to GNU_STACK. For the kernel part of this solution see below; now
we just need toolchain support to generate a qemu executable with the
PT_S390_REQUEST_PGSTE segment. The -z linker option comes to mind, e.g.
"-z request-pgste".

--
From 90c15b34c07644395d0bd96a632826740c539561 Mon Sep 17 00:00:00 2001
From: Martin Schwidefsky <schwidefsky@de.ibm.com>
Date: Wed, 7 Jun 2017 14:10:24 +0200
Subject: [PATCH] s390/kvm: avoid global config of vm.alloc_pgste=1

The system control vm.alloc_pgste is used to control the size of the
page tables, either 2K or 4K. The idea is that a KVM host sets the
vm.alloc_pgste control to 1 which causes *all* new processes to run
with 4K page tables. For a non-kvm system the control should stay off
to save on memory used for page tables.

Trouble is that distributions choose to set the control globally to
be able to run KVM guests. This wastes memory on non-KVM systems.

Introduce the PT_S390_REQUEST_PGSTE ELF segment type and "mark" the
qemu executable with it. All executables with this (empty) segment in
their ELF phdr array will be started with 4K page tables. Any executable
without PT_S390_REQUEST_PGSTE will run with the default 2K page tables.

This removes the need to set vm.alloc_pgste=1 for a KVM host and
minimizes the waste of memory for page tables.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
---
 arch/s390/Kconfig                   |  1 +
 arch/s390/include/asm/elf.h         | 26 ++++++++++++++++++++++++++
 arch/s390/include/asm/mmu_context.h |  3 ++-
 arch/s390/include/asm/ptrace.h      |  2 ++
 arch/s390/include/asm/thread_info.h |  1 +
 arch/s390/kernel/entry.S            | 13 ++++++++++++-
 6 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 69a77eecaec1..7bd182676ddd 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -64,6 +64,7 @@ config ARCH_SUPPORTS_UPROBES
 
 config S390
 	def_bool y
+	select ARCH_BINFMT_ELF_STATE
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/s390/include/asm/elf.h b/arch/s390/include/asm/elf.h
index e8f623041769..7811d1006642 100644
--- a/arch/s390/include/asm/elf.h
+++ b/arch/s390/include/asm/elf.h
@@ -117,6 +117,9 @@
 #define ELF_DATA	ELFDATA2MSB
 #define ELF_ARCH	EM_S390
 
+/* s390 specific phdr types */
+#define PT_S390_REQUEST_PGSTE	0x70000000
+
 /*
  * ELF register definitions..
  */
@@ -151,6 +154,29 @@ extern unsigned int vdso_enabled;
 	 && (x)->e_ident[EI_CLASS] == ELF_CLASS)
 #define compat_start_thread	start_thread31
 
+struct arch_elf_state {
+};
+
+#define INIT_ARCH_ELF_STATE { }
+
+#define arch_check_elf(ehdr, interp, interp_ehdr, state) (0)
+#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state)	\
+({								\
+	struct elf64_hdr *_ehdr = (void *) ehdr;		\
+	struct elf64_phdr *_phdr = (void *) phdr;		\
+	int _rc = 0;						\
+	if (_ehdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
+	    _phdr->p_type == PT_S390_REQUEST_PGSTE &&		\
+	    !page_table_allocate_pgste &&			\
+	    !test_thread_flag(TIF_REQUEST_PGSTE)) {		\
+		set_thread_flag(TIF_REQUEST_PGSTE);		\
+		set_pt_regs_flag(task_pt_regs(current),		\
+				 PIF_SYSCALL_RESTART);		\
+		_rc = -EAGAIN;					\
+	}							\
+	_rc;							\
+})
+
 /* For SVR4/S390 the function pointer to be registered with `atexit` is
    passed in R14. */
 #define ELF_PLAT_INIT(_r, load_addr) \
diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
index c119d564d8f2..1201b18e817d 100644
--- a/arch/s390/include/asm/mmu_context.h
+++ b/arch/s390/include/asm/mmu_context.h
@@ -25,7 +25,8 @@ static inline int init_new_context(struct task_struct *tsk,
 	mm->context.gmap_asce = 0;
 	mm->context.flush_mm = 0;
 #ifdef CONFIG_PGSTE
-	mm->context.alloc_pgste = page_table_allocate_pgste;
+	mm->context.alloc_pgste = page_table_allocate_pgste ||
+		test_thread_flag(TIF_REQUEST_PGSTE);
 	mm->context.has_pgste = 0;
 	mm->context.use_skey = 0;
 	mm->context.use_cmma = 0;
diff --git a/arch/s390/include/asm/ptrace.h b/arch/s390/include/asm/ptrace.h
index 99bc456cc26a..24baa80f7af6 100644
--- a/arch/s390/include/asm/ptrace.h
+++ b/arch/s390/include/asm/ptrace.h
@@ -11,9 +11,11 @@
 
 #define PIF_SYSCALL		0	/* inside a system call */
 #define PIF_PER_TRAP		1	/* deliver sigtrap on return to user */
+#define PIF_SYSCALL_RESTART	2	/* restart the current system call */
 
 #define _PIF_SYSCALL		_BITUL(PIF_SYSCALL)
 #define _PIF_PER_TRAP		_BITUL(PIF_PER_TRAP)
+#define _PIF_SYSCALL_RESTART	_BITUL(PIF_SYSCALL_RESTART)
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/s390/include/asm/thread_info.h b/arch/s390/include/asm/thread_info.h
index f36e6e2b73f0..e7444d76cc63 100644
--- a/arch/s390/include/asm/thread_info.h
+++ b/arch/s390/include/asm/thread_info.h
@@ -56,6 +56,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
 #define TIF_NEED_RESCHED	2	/* rescheduling necessary */
 #define TIF_UPROBE		3	/* breakpointed or single-stepping */
 #define TIF_GUARDED_STORAGE	4	/* load guarded storage control block */
+#define TIF_REQUEST_PGSTE	5	/* New mm's will use 4K page tables */
 #define TIF_SYSCALL_TRACE	8	/* syscall trace active */
 #define TIF_SYSCALL_AUDIT	9	/* syscall auditing active */
 #define TIF_SECCOMP		10	/* secure computing */
diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
index 0c2c3b8bfc9a..8c824b32527a 100644
--- a/arch/s390/kernel/entry.S
+++ b/arch/s390/kernel/entry.S
@@ -52,7 +52,7 @@ _TIF_TRACE	= (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SECCOMP | \
 		   _TIF_SYSCALL_TRACEPOINT)
 _CIF_WORK	= (_CIF_MCCK_PENDING | _CIF_ASCE_PRIMARY | \
 		   _CIF_ASCE_SECONDARY | _CIF_FPU)
-_PIF_WORK	= (_PIF_PER_TRAP)
+_PIF_WORK	= (_PIF_PER_TRAP | _PIF_SYSCALL_RESTART)
 
 #define BASED(name) name-cleanup_critical(%r13)
 
@@ -342,6 +342,8 @@ ENTRY(system_call)
 	jo	.Lsysc_guarded_storage
 	TSTMSK	__PT_FLAGS(%r11),_PIF_PER_TRAP
 	jo	.Lsysc_singlestep
+	TSTMSK	__PT_FLAGS(%r11),_PIF_SYSCALL_RESTART
+	jo	.Lsysc_syscall_restart
 	TSTMSK	__TI_flags(%r12),_TIF_SIGPENDING
 	jo	.Lsysc_sigpending
 	TSTMSK	__TI_flags(%r12),_TIF_NOTIFY_RESUME
@@ -434,6 +436,15 @@ ENTRY(system_call)
 	jg	do_per_trap
 
 #
+# _PIF_SYSCALL_RESTART is set, repeat the current system call
+#
+.Lsysc_syscall_restart:
+	ni	__PT_FLAGS+7(%r11),255-_PIF_SYSCALL_RESTART
+	lmg	%r1,%r7,__PT_R1(%r11)	# load svc arguments
+	lg	%r2,__PT_ORIG_GPR2(%r11)
+	j	.Lsysc_do_svc
+
+#
 # call tracehook_report_syscall_entry/tracehook_report_syscall_exit before
 # and after the system call
 #
-- 
2.11.2


-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-07 12:34             ` Martin Schwidefsky
@ 2017-06-07 20:47               ` Heiko Carstens
  2017-06-08  5:35                 ` Martin Schwidefsky
  0 siblings, 1 reply; 30+ messages in thread
From: Heiko Carstens @ 2017-06-07 20:47 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Christian Borntraeger, David Hildenbrand, kvm, linux-kernel,
	Thomas Huth, Andreas Krebbel

On Wed, Jun 07, 2017 at 02:34:40PM +0200, Martin Schwidefsky wrote:
> +#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state)	\
> +({								\
> +	struct elf64_hdr *_ehdr = (void *) ehdr;		\
> +	struct elf64_phdr *_phdr = (void *) phdr;		\
> +	int _rc = 0;						\
> +	if (_ehdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
> +	    _phdr->p_type == PT_S390_REQUEST_PGSTE &&		\
> +	    !page_table_allocate_pgste &&			\
> +	    !test_thread_flag(TIF_REQUEST_PGSTE)) {		\
> +		set_thread_flag(TIF_REQUEST_PGSTE);		\
> +		set_pt_regs_flag(task_pt_regs(current),		\
> +				 PIF_SYSCALL_RESTART);		\
> +		_rc = -EAGAIN;					\
> +	}							\
> +	_rc;							\
> +})

I'm wondering if this should simply fail, if a PT_S390_REQUEST_PGSTE type
segment exists, but it is not ELFCLASS64?
It will fail later anyway on s390_enable_sie(), but...

> diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
> index c119d564d8f2..1201b18e817d 100644
> --- a/arch/s390/include/asm/mmu_context.h
> +++ b/arch/s390/include/asm/mmu_context.h
> @@ -25,7 +25,8 @@ static inline int init_new_context(struct task_struct *tsk,
>  	mm->context.gmap_asce = 0;
>  	mm->context.flush_mm = 0;
>  #ifdef CONFIG_PGSTE
> -	mm->context.alloc_pgste = page_table_allocate_pgste;
> +	mm->context.alloc_pgste = page_table_allocate_pgste ||
> +		test_thread_flag(TIF_REQUEST_PGSTE);

I think the alloc_pgste flag should be inherited on fork, no?

* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-07 20:47               ` Heiko Carstens
@ 2017-06-08  5:35                 ` Martin Schwidefsky
  2017-06-08  6:25                   ` Heiko Carstens
  0 siblings, 1 reply; 30+ messages in thread
From: Martin Schwidefsky @ 2017-06-08  5:35 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: Christian Borntraeger, David Hildenbrand, kvm, linux-kernel,
	Thomas Huth, Andreas Krebbel

On Wed, 7 Jun 2017 22:47:56 +0200
Heiko Carstens <heiko.carstens@de.ibm.com> wrote:

> On Wed, Jun 07, 2017 at 02:34:40PM +0200, Martin Schwidefsky wrote:
> > +#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state)	\
> > +({								\
> > +	struct elf64_hdr *_ehdr = (void *) ehdr;		\
> > +	struct elf64_phdr *_phdr = (void *) phdr;		\
> > +	int _rc = 0;						\
> > +	if (_ehdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
> > +	    _phdr->p_type == PT_S390_REQUEST_PGSTE &&		\
> > +	    !page_table_allocate_pgste &&			\
> > +	    !test_thread_flag(TIF_REQUEST_PGSTE)) {		\
> > +		set_thread_flag(TIF_REQUEST_PGSTE);		\
> > +		set_pt_regs_flag(task_pt_regs(current),		\
> > +				 PIF_SYSCALL_RESTART);		\
> > +		_rc = -EAGAIN;					\
> > +	}							\
> > +	_rc;							\
> > +})  
> 
> I'm wondering if this should simply fail, if a PT_S390_REQUEST_PGSTE type
> segment exists, but it is not ELFCLASS64?
> It will fail later anyway on s390_enable_sie(), but...

Does it matter if it fails for a 32-bit ELF file? Just makes the code more
complex without benefit, no?

> > diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
> > index c119d564d8f2..1201b18e817d 100644
> > --- a/arch/s390/include/asm/mmu_context.h
> > +++ b/arch/s390/include/asm/mmu_context.h
> > @@ -25,7 +25,8 @@ static inline int init_new_context(struct task_struct *tsk,
> >  	mm->context.gmap_asce = 0;
> >  	mm->context.flush_mm = 0;
> >  #ifdef CONFIG_PGSTE
> > -	mm->context.alloc_pgste = page_table_allocate_pgste;
> > +	mm->context.alloc_pgste = page_table_allocate_pgste ||
> > +		test_thread_flag(TIF_REQUEST_PGSTE);  
> 
> I think the alloc_pgste flag should be inherited on fork, no?

Yes, that makes it more consistent. I'll add it.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.

* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-08  5:35                 ` Martin Schwidefsky
@ 2017-06-08  6:25                   ` Heiko Carstens
  2017-06-08 11:24                     ` Martin Schwidefsky
  0 siblings, 1 reply; 30+ messages in thread
From: Heiko Carstens @ 2017-06-08  6:25 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Christian Borntraeger, David Hildenbrand, kvm, linux-kernel,
	Thomas Huth, Andreas Krebbel

On Thu, Jun 08, 2017 at 07:35:28AM +0200, Martin Schwidefsky wrote:
> On Wed, 7 Jun 2017 22:47:56 +0200
> Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
> > On Wed, Jun 07, 2017 at 02:34:40PM +0200, Martin Schwidefsky wrote:
> > > +#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state)	\
> > > +({								\
> > > +	struct elf64_hdr *_ehdr = (void *) ehdr;		\
> > > +	struct elf64_phdr *_phdr = (void *) phdr;		\
> > > +	int _rc = 0;						\
> > > +	if (_ehdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
> > > +	    _phdr->p_type == PT_S390_REQUEST_PGSTE &&		\
> > > +	    !page_table_allocate_pgste &&			\
> > > +	    !test_thread_flag(TIF_REQUEST_PGSTE)) {		\
> > > +		set_thread_flag(TIF_REQUEST_PGSTE);		\
> > > +		set_pt_regs_flag(task_pt_regs(current),		\
> > > +				 PIF_SYSCALL_RESTART);		\
> > > +		_rc = -EAGAIN;					\
> > > +	}							\
> > > +	_rc;							\
> > > +})  
> > 
> > I'm wondering if this should simply fail, if a PT_S390_REQUEST_PGSTE type
> > segment exists, but it is not ELFCLASS64?
> > It will fail later anyway on s390_enable_sie(), but...
> 
> Does it matter if it fails for a 32-bit ELF file? Just makes the code more
> complex without benefit, no?

It would be more consistent, since right now a 32-bit ELF file with
PT_S390_REQUEST_PGSTE will be executed, but the page tables won't have any
pgstes. That's sort of odd, isn't it? And the fact that later on it won't be
able to create a virtual machine, because our current implementation doesn't
allow that for compat tasks, is sort of unrelated.
But anyway, I'll leave that up to you, it doesn't really matter.

> 
> > > diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
> > > index c119d564d8f2..1201b18e817d 100644
> > > --- a/arch/s390/include/asm/mmu_context.h
> > > +++ b/arch/s390/include/asm/mmu_context.h
> > > @@ -25,7 +25,8 @@ static inline int init_new_context(struct task_struct *tsk,
> > >  	mm->context.gmap_asce = 0;
> > >  	mm->context.flush_mm = 0;
> > >  #ifdef CONFIG_PGSTE
> > > -	mm->context.alloc_pgste = page_table_allocate_pgste;
> > > +	mm->context.alloc_pgste = page_table_allocate_pgste ||
> > > +		test_thread_flag(TIF_REQUEST_PGSTE);  
> > 
> > I think the alloc_pgste flag should be inherited on fork, no?
> 
> Yes, that makes it more consistent. I'll add it.

By the way, with the _current_ code, what prevents a scenario like:

- set allocate_pgste sysctl to 1
- create kvm guest
- s390_enable_sie
- run vcpu
- set allocate_pgste sysctl to 0
- clone(... CLONE_FILES ...) (that is: new mm without pgstes, but shared fds)
- [child] run vcpu

Is there anything that makes sure we cannot execute the sie instruction in
the child process?

* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-08  6:25                   ` Heiko Carstens
@ 2017-06-08 11:24                     ` Martin Schwidefsky
  2017-06-08 13:17                       ` Heiko Carstens
  0 siblings, 1 reply; 30+ messages in thread
From: Martin Schwidefsky @ 2017-06-08 11:24 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: Christian Borntraeger, David Hildenbrand, kvm, linux-kernel,
	Thomas Huth, Andreas Krebbel

On Thu, 8 Jun 2017 08:25:31 +0200
Heiko Carstens <heiko.carstens@de.ibm.com> wrote:

> On Thu, Jun 08, 2017 at 07:35:28AM +0200, Martin Schwidefsky wrote:
> > On Wed, 7 Jun 2017 22:47:56 +0200
> > Heiko Carstens <heiko.carstens@de.ibm.com> wrote:  
> > > On Wed, Jun 07, 2017 at 02:34:40PM +0200, Martin Schwidefsky wrote:  
> > > > +#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state)	\
> > > > +({								\
> > > > +	struct elf64_hdr *_ehdr = (void *) ehdr;		\
> > > > +	struct elf64_phdr *_phdr = (void *) phdr;		\
> > > > +	int _rc = 0;						\
> > > > +	if (_ehdr->e_ident[EI_CLASS] == ELFCLASS64 &&		\
> > > > +	    _phdr->p_type == PT_S390_REQUEST_PGSTE &&		\
> > > > +	    !page_table_allocate_pgste &&			\
> > > > +	    !test_thread_flag(TIF_REQUEST_PGSTE)) {		\
> > > > +		set_thread_flag(TIF_REQUEST_PGSTE);		\
> > > > +		set_pt_regs_flag(task_pt_regs(current),		\
> > > > +				 PIF_SYSCALL_RESTART);		\
> > > > +		_rc = -EAGAIN;					\
> > > > +	}							\
> > > > +	_rc;							\
> > > > +})    
> > > 
> > > I'm wondering if this should simply fail, if a PT_S390_REQUEST_PGSTE type
> > > segment exists, but it is not ELFCLASS64?
> > > It will fail later anyway on s390_enable_sie(), but...  
> > 
> > Does it matter if it fails for a 32-bit ELF file? Just makes the code more
> > complex without benefit, no?  
> 
> It would be more consistent, since right now a 32-bit ELF file with
> PT_S390_REQUEST_PGSTE will be executed, but the page tables won't have any
> pgstes. That's sort of odd, isn't it? And the fact that later on it won't be
> able to create a virtual machine, because our current implementation doesn't
> allow that for compat tasks, is sort of unrelated.
> But anyway, I'll leave that up to you, it doesn't really matter.

Actually the code will be less complex if we add PT_S390_REQUEST_PGSTE for
32-bit ELF files as well. It does not make sense to define the segment for
a compat process, as KVM won't work, but you get what you ask for.

This looks like this:

#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state)        \
({                                                              \
        int _rc = 0;                                            \
        if (phdr->p_type == PT_S390_REQUEST_PGSTE &&            \
            !page_table_allocate_pgste &&                       \
            !test_thread_flag(TIF_REQUEST_PGSTE)) {             \
                set_thread_flag(TIF_REQUEST_PGSTE);             \
                set_pt_regs_flag(task_pt_regs(current),         \
                                 PIF_SYSCALL_RESTART);          \
                _rc = -EAGAIN;                                  \
        }                                                       \
        _rc;                                                    \
})

phdr is a (struct elf_phdr *), which is defined to be either a (struct elf64_phdr *)
or a (struct elf32_phdr *). The check works in both cases.
 
> >   
> > > > diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
> > > > index c119d564d8f2..1201b18e817d 100644
> > > > --- a/arch/s390/include/asm/mmu_context.h
> > > > +++ b/arch/s390/include/asm/mmu_context.h
> > > > @@ -25,7 +25,8 @@ static inline int init_new_context(struct task_struct *tsk,
> > > >  	mm->context.gmap_asce = 0;
> > > >  	mm->context.flush_mm = 0;
> > > >  #ifdef CONFIG_PGSTE
> > > > -	mm->context.alloc_pgste = page_table_allocate_pgste;
> > > > +	mm->context.alloc_pgste = page_table_allocate_pgste ||
> > > > +		test_thread_flag(TIF_REQUEST_PGSTE);    
> > > 
> > > I think the alloc_pgste flag should be inherited on fork, no?  
> > 
> > Yes, that makes it more consistent. I'll add it.  
> 
> By the way, with the _current_ code, what prevents a scenario like:
> 
> - set allocate_pgste sysctl to 1
> - create kvm guest
> - s390_enable_sie
> - run vcpu
> - set allocate_pgste sysctl to 0
> - clone(... CLONE_FILES ...) (that is: new mm without pgstes, but shared fds)
> - [child] run vcpu
> 
> Is there anything that makes sure we cannot execute the sie instruction in
> the child process?

Yes, that looks like a loophole.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.

* Re: [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste
  2017-06-08 11:24                     ` Martin Schwidefsky
@ 2017-06-08 13:17                       ` Heiko Carstens
  0 siblings, 0 replies; 30+ messages in thread
From: Heiko Carstens @ 2017-06-08 13:17 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Christian Borntraeger, David Hildenbrand, kvm, linux-kernel,
	Thomas Huth, Andreas Krebbel

On Thu, Jun 08, 2017 at 01:24:01PM +0200, Martin Schwidefsky wrote:
> On Thu, 8 Jun 2017 08:25:31 +0200
> Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
> > It would be more consistent, since right now a 32-bit ELF file with
> > PT_S390_REQUEST_PGSTE will be executed, but the page tables won't have any
> > pgstes. That's sort of odd, isn't it? And the fact that later on it won't be
> > able to create a virtual machine, because our current implementation doesn't
> > allow that for compat tasks, is sort of unrelated.
> > But anyway, I'll leave that up to you, it doesn't really matter.
> 
> Actually the code will be less complex if we add PT_S390_REQUEST_PGSTE for
> 32-bit ELF files as well. It does not make sense to define the segment for
> a compat process, as KVM won't work, but you get what you ask for.
> 
> This looks like this:
> 
> #define arch_elf_pt_proc(ehdr, phdr, elf, interp, state)        \
> ({                                                              \
>         int _rc = 0;                                            \
>         if (phdr->p_type == PT_S390_REQUEST_PGSTE &&            \
>             !page_table_allocate_pgste &&                       \
>             !test_thread_flag(TIF_REQUEST_PGSTE)) {             \
>                 set_thread_flag(TIF_REQUEST_PGSTE);             \
>                 set_pt_regs_flag(task_pt_regs(current),         \
>                                  PIF_SYSCALL_RESTART);          \
>                 _rc = -EAGAIN;                                  \
>         }                                                       \
>         _rc;                                                    \
> })
> 
> phdr is a (struct elf_phdr *), which is defined to be either a (struct elf64_phdr *)
> or a (struct elf32_phdr *). The check works in both cases.

Yes, that makes sense.

end of thread, other threads:[~2017-06-08 13:17 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-29 16:32 [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste David Hildenbrand
2017-05-29 16:32 ` [PATCH RFC 1/2] s390x: mm: allow mixed page table types (2k and 4k) David Hildenbrand
2017-06-01 11:39   ` Christian Borntraeger
2017-06-01 12:44     ` David Hildenbrand
2017-06-01 12:59   ` David Hildenbrand
2017-06-02  7:11     ` Christian Borntraeger
2017-05-29 16:32 ` [PATCH RFC 2/2] KVM: s390: Introduce KVM_VM_S390_LATE_MMAP David Hildenbrand
2017-06-01 10:46 ` [PATCH RFC 0/2] KVM: s390: avoid having to enable vm.alloc_pgste Martin Schwidefsky
2017-06-01 11:24   ` Christian Borntraeger
2017-06-01 11:27   ` David Hildenbrand
2017-06-02  7:06     ` Heiko Carstens
2017-06-02  7:02   ` Heiko Carstens
2017-06-02  7:13     ` Christian Borntraeger
2017-06-02  7:16       ` Martin Schwidefsky
2017-06-02  7:18         ` Christian Borntraeger
2017-06-02  7:25           ` Christian Borntraeger
2017-06-02  8:11             ` Martin Schwidefsky
2017-06-02  9:46     ` Martin Schwidefsky
2017-06-02 10:19       ` Christian Borntraeger
2017-06-02 10:53         ` Martin Schwidefsky
2017-06-02 13:20           ` Christian Borntraeger
2017-06-07 12:34             ` Martin Schwidefsky
2017-06-07 20:47               ` Heiko Carstens
2017-06-08  5:35                 ` Martin Schwidefsky
2017-06-08  6:25                   ` Heiko Carstens
2017-06-08 11:24                     ` Martin Schwidefsky
2017-06-08 13:17                       ` Heiko Carstens
2017-06-02 10:28       ` Heiko Carstens
2017-06-02 10:48         ` Martin Schwidefsky
2017-06-02 10:54     ` David Hildenbrand
