linux-mm.kvack.org archive mirror
* [PATCH 00/35] KVM: s390: Add support for protected VMs
@ 2020-02-07 11:39 Christian Borntraeger
  2020-02-07 11:39 ` [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages Christian Borntraeger
                   ` (6 more replies)
  0 siblings, 7 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-07 11:39 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton


Upfront: This series contains a "pretty small" common code memory
management change that allows paging, backing guests with files, etc.
almost exactly like for normal VMs. It should be a no-op for all
architectures that do not opt in, and it should be usable for other
architectures that also want to be notified when pages are about to be
used for things like I/O.

I CCed linux-mm (plus Andrew as mm maintainer and Andrea, who was
involved in some of the design discussions) on the first patch (the
common code mm change). I also added that CC to some other patches
that make use of this infrastructure or deal with arch-specific memory
management.

The full patch queue is on the linux-s390 and kvm mailing lists. It
would be good to get an ACK for the mm patch; I can then carry it via
the s390 tree.

Overview
--------
Protected VMs (PVMs) are KVM VMs where KVM can no longer access the
VM's state, such as guest memory and guest registers. Instead, PVMs
are mostly managed by a new entity called the Ultravisor (UV), which
provides an API through which KVM and the PVM can request management
actions.

PVMs are encrypted at rest and protected from hypervisor access while
running. They switch from normal operation into protected mode, so we
can still use the standard boot process to load an encrypted blob and
then move it into protected mode.

Rebooting is only possible by transitioning back through
unprotected/normal mode and then switching to protected mode again.

All patches are in the protvirtv3 branch of the kernel.org s390 KVM git:
https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git/log/?h=protvirtv3

Claudio presented the technology at KVM Forum 2019:

https://static.sched.com/hosted_files/kvmforum2019/3b/ibm_protected_vms_s390x.pdf


RFCv2 -> v1 (you can diff the protvirtv2 and protvirtv3 branches)
- tons of review feedback integrated (see mail thread)
- memory management now complete and working
- Documentation patches merged
- interrupt patches merged
- CONFIG_KVM_S390_PROTECTED_VIRTUALIZATION_HOST removed
- SIDA interface integrated into memop
- for patches that were merged together, I dropped Reviewed-bys that
  were not present on all of the combined patches

Christian Borntraeger (3):
  KVM: s390/mm: Make pages accessible before destroying the guest
  KVM: s390: protvirt: Add SCLP interrupt handling
  KVM: s390: protvirt: do not inject interrupts after start

Claudio Imbrenda (3):
  mm:gup/writeback: add callbacks for inaccessible pages
  s390/mm: provide memory management functions for protected KVM guests
  KVM: s390/mm: handle guest unpin events

Janosch Frank (23):
  KVM: s390: add new variants of UV CALL
  KVM: s390: protvirt: Add initial lifecycle handling
  KVM: s390: protvirt: Add KVM api documentation
  KVM: s390: protvirt: Secure memory is not mergeable
  KVM: s390: protvirt: Handle SE notification interceptions
  KVM: s390: protvirt: Instruction emulation
  KVM: s390: protvirt: Handle spec exception loops
  KVM: s390: protvirt: Add new gprs location handling
  KVM: S390: protvirt: Introduce instruction data area bounce buffer
  KVM: s390: protvirt: handle secure guest prefix pages
  KVM: s390: protvirt: Write sthyi data to instruction data area
  KVM: s390: protvirt: STSI handling
  KVM: s390: protvirt: disallow one_reg
  KVM: s390: protvirt: Only sync fmt4 registers
  KVM: s390: protvirt: Add program exception injection
  KVM: s390: protvirt: Add diag 308 subcode 8 - 10 handling
  KVM: s390: protvirt: UV calls diag308 0, 1
  KVM: s390: protvirt: Report CPU state to Ultravisor
  KVM: s390: protvirt: Support cmd 5 operation state
  KVM: s390: protvirt: Add UV debug trace
  KVM: s390: protvirt: Mask PSW interrupt bits for interception 104 and
    112
  KVM: s390: protvirt: Add UV cpu reset calls
  DOCUMENTATION: Protected virtual machine introduction and IPL

Michael Mueller (2):
  KVM: s390: protvirt: Add interruption injection controls
  KVM: s390: protvirt: Implement interruption injection

Ulrich Weigand (1):
  KVM: s390/interrupt: do not pin adapter interrupt pages

Vasily Gorbik (3):
  s390/protvirt: introduce host side setup
  s390/protvirt: add ultravisor initialization
  s390/mm: add (non)secure page access exceptions handlers

 .../admin-guide/kernel-parameters.txt         |   5 +
 Documentation/virt/kvm/api.txt                |  67 ++-
 Documentation/virt/kvm/index.rst              |   2 +
 Documentation/virt/kvm/s390-pv-boot.rst       |  79 +++
 Documentation/virt/kvm/s390-pv.rst            | 116 +++++
 MAINTAINERS                                   |   1 +
 arch/s390/boot/Makefile                       |   2 +-
 arch/s390/boot/uv.c                           |  21 +-
 arch/s390/include/asm/gmap.h                  |   3 +
 arch/s390/include/asm/kvm_host.h              | 114 ++++-
 arch/s390/include/asm/mmu.h                   |   2 +
 arch/s390/include/asm/mmu_context.h           |   1 +
 arch/s390/include/asm/page.h                  |   5 +
 arch/s390/include/asm/pgtable.h               |  35 +-
 arch/s390/include/asm/uv.h                    | 267 +++++++++-
 arch/s390/kernel/Makefile                     |   1 +
 arch/s390/kernel/pgm_check.S                  |   4 +-
 arch/s390/kernel/setup.c                      |   7 +-
 arch/s390/kernel/uv.c                         | 274 ++++++++++
 arch/s390/kvm/Makefile                        |   2 +-
 arch/s390/kvm/diag.c                          |   1 +
 arch/s390/kvm/intercept.c                     | 109 +++-
 arch/s390/kvm/interrupt.c                     | 371 +++++++++++---
 arch/s390/kvm/kvm-s390.c                      | 477 ++++++++++++++++--
 arch/s390/kvm/kvm-s390.h                      |  39 ++
 arch/s390/kvm/priv.c                          |  11 +-
 arch/s390/kvm/pv.c                            | 292 +++++++++++
 arch/s390/mm/fault.c                          |  86 ++++
 arch/s390/mm/gmap.c                           |  65 ++-
 include/linux/gfp.h                           |   6 +
 include/uapi/linux/kvm.h                      |  42 +-
 mm/gup.c                                      |   2 +
 mm/page-writeback.c                           |   1 +
 33 files changed, 2325 insertions(+), 185 deletions(-)
 create mode 100644 Documentation/virt/kvm/s390-pv-boot.rst
 create mode 100644 Documentation/virt/kvm/s390-pv.rst
 create mode 100644 arch/s390/kernel/uv.c
 create mode 100644 arch/s390/kvm/pv.c

-- 
2.24.0




* [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-07 11:39 [PATCH 00/35] KVM: s390: Add support for protected VMs Christian Borntraeger
@ 2020-02-07 11:39 ` Christian Borntraeger
  2020-02-10 17:27   ` Christian Borntraeger
                     ` (2 more replies)
  2020-02-07 11:39 ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages Christian Borntraeger
                   ` (5 subsequent siblings)
  6 siblings, 3 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-07 11:39 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton

From: Claudio Imbrenda <imbrenda@linux.ibm.com>

With the introduction of protected KVM guests on s390 there is now a
concept of inaccessible pages. These pages need to be made accessible
before the host can access them.

While CPU accesses will trigger a fault that can be resolved, I/O
accesses will just fail. We need to add a callback into architecture
code for places that will do I/O, namely when writeback is started or
when a page reference is taken.
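
For reference, this is the shape of the hook (a minimal sketch, with a
hypothetical architecture "foo" standing in for an opting-in
architecture; the generic no-op stub is part of this patch and the
real s390 implementation follows later in the series):

/* arch/foo/include/asm/page.h -- hypothetical opt-in */
#define HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
int arch_make_page_accessible(struct page *page);

/*
 * Callers (gup, writeback) invoke this on a page they hold a
 * reference to, right before the page can become the target of I/O;
 * a return value of 0 means success.
 */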

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 include/linux/gfp.h | 6 ++++++
 mm/gup.c            | 2 ++
 mm/page-writeback.c | 1 +
 3 files changed, 9 insertions(+)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index e5b817cb86e7..be2754841369 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -485,6 +485,12 @@ static inline void arch_free_page(struct page *page, int order) { }
 #ifndef HAVE_ARCH_ALLOC_PAGE
 static inline void arch_alloc_page(struct page *page, int order) { }
 #endif
+#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
+static inline int arch_make_page_accessible(struct page *page)
+{
+	return 0;
+}
+#endif
 
 struct page *
 __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
diff --git a/mm/gup.c b/mm/gup.c
index 7646bf993b25..a01262cd2821 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -257,6 +257,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 			page = ERR_PTR(-ENOMEM);
 			goto out;
 		}
+		arch_make_page_accessible(page);
 	}
 	if (flags & FOLL_TOUCH) {
 		if ((flags & FOLL_WRITE) &&
@@ -1870,6 +1871,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 
 		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 
+		arch_make_page_accessible(page);
 		SetPageReferenced(page);
 		pages[*nr] = page;
 		(*nr)++;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2caf780a42e7..0f0bd14571b1 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2806,6 +2806,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
 		inc_lruvec_page_state(page, NR_WRITEBACK);
 		inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
 	}
+	arch_make_page_accessible(page);
 	unlock_page_memcg(page);
 	return ret;
 
-- 
2.24.0




* [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-07 11:39 [PATCH 00/35] KVM: s390: Add support for protected VMs Christian Borntraeger
  2020-02-07 11:39 ` [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages Christian Borntraeger
@ 2020-02-07 11:39 ` Christian Borntraeger
  2020-02-10 12:26   ` David Hildenbrand
  2020-02-10 12:40   ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages David Hildenbrand
  2020-02-07 11:39 ` [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests Christian Borntraeger
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-07 11:39 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton

From: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>

The adapter interrupt page containing the indicator bits is currently
pinned. That means that a guest with many devices can pin a lot of
memory pages in the host. This also complicates the reference tracking
that is needed for memory management handling of protected virtual
machines.

We can reuse the pte notifiers to "cache" the page without pinning it.
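
The resulting lifecycle, roughly (our summary of the code below, not
part of the original commit message):

/*
 * map:        remember guest_addr -> userspace addr; no page pinned
 * first use:  get_map_page() arms the gmap pte notifier via
 *             PGSTE_IN_BIT, looks the page up with
 *             get_user_pages_remote() and caches it in map->page
 * use:        adapter_indicators_set() holds a reference on the
 *             cached page only while it sets the indicator bits
 * invalidate: kvm_s390_adapter_gmap_notifier() clears map->page when
 *             the host pte changes; the next use looks it up again
 */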

Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
[borntraeger@de.ibm.com: patch merging, splitting, fixing]
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/include/asm/kvm_host.h |   4 +-
 arch/s390/kvm/interrupt.c        | 155 +++++++++++++++++++++++--------
 arch/s390/kvm/kvm-s390.c         |   4 +
 arch/s390/kvm/kvm-s390.h         |   2 +
 4 files changed, 123 insertions(+), 42 deletions(-)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 73044545ecac..884503e05424 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -701,9 +701,9 @@ struct s390_io_adapter {
 	bool masked;
 	bool swap;
 	bool suppressible;
-	struct rw_semaphore maps_lock;
+	spinlock_t maps_lock;
 	struct list_head maps;
-	atomic_t nr_maps;
+	int nr_maps;
 };
 
 #define MAX_S390_IO_ADAPTERS ((MAX_ISC + 1) * 8)
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index c06c89d370a7..4bfb2f8fe57c 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -28,6 +28,7 @@
 #include <asm/switch_to.h>
 #include <asm/nmi.h>
 #include <asm/airq.h>
+#include <linux/pagemap.h>
 #include "kvm-s390.h"
 #include "gaccess.h"
 #include "trace-s390.h"
@@ -2328,8 +2329,8 @@ static int register_io_adapter(struct kvm_device *dev,
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&adapter->maps);
-	init_rwsem(&adapter->maps_lock);
-	atomic_set(&adapter->nr_maps, 0);
+	spin_lock_init(&adapter->maps_lock);
+	adapter->nr_maps = 0;
 	adapter->id = adapter_info.id;
 	adapter->isc = adapter_info.isc;
 	adapter->maskable = adapter_info.maskable;
@@ -2375,19 +2376,15 @@ static int kvm_s390_adapter_map(struct kvm *kvm, unsigned int id, __u64 addr)
 		ret = -EFAULT;
 		goto out;
 	}
-	ret = get_user_pages_fast(map->addr, 1, FOLL_WRITE, &map->page);
-	if (ret < 0)
-		goto out;
-	BUG_ON(ret != 1);
-	down_write(&adapter->maps_lock);
-	if (atomic_inc_return(&adapter->nr_maps) < MAX_S390_ADAPTER_MAPS) {
+	spin_lock(&adapter->maps_lock);
+	if (adapter->nr_maps < MAX_S390_ADAPTER_MAPS) {
+		adapter->nr_maps++;
 		list_add_tail(&map->list, &adapter->maps);
 		ret = 0;
 	} else {
-		put_page(map->page);
 		ret = -EINVAL;
 	}
-	up_write(&adapter->maps_lock);
+	spin_unlock(&adapter->maps_lock);
 out:
 	if (ret)
 		kfree(map);
@@ -2403,18 +2400,17 @@ static int kvm_s390_adapter_unmap(struct kvm *kvm, unsigned int id, __u64 addr)
 	if (!adapter || !addr)
 		return -EINVAL;
 
-	down_write(&adapter->maps_lock);
+	spin_lock(&adapter->maps_lock);
 	list_for_each_entry_safe(map, tmp, &adapter->maps, list) {
 		if (map->guest_addr == addr) {
 			found = 1;
-			atomic_dec(&adapter->nr_maps);
+			adapter->nr_maps--;
 			list_del(&map->list);
-			put_page(map->page);
 			kfree(map);
 			break;
 		}
 	}
-	up_write(&adapter->maps_lock);
+	spin_unlock(&adapter->maps_lock);
 
 	return found ? 0 : -EINVAL;
 }
@@ -2430,7 +2426,6 @@ void kvm_s390_destroy_adapters(struct kvm *kvm)
 		list_for_each_entry_safe(map, tmp,
 					 &kvm->arch.adapters[i]->maps, list) {
 			list_del(&map->list);
-			put_page(map->page);
 			kfree(map);
 		}
 		kfree(kvm->arch.adapters[i]);
@@ -2690,6 +2685,31 @@ struct kvm_device_ops kvm_flic_ops = {
 	.destroy = flic_destroy,
 };
 
+void kvm_s390_adapter_gmap_notifier(struct gmap *gmap, unsigned long start,
+				    unsigned long end)
+{
+	struct kvm *kvm = gmap->private;
+	struct s390_map_info *map, *tmp;
+	int i;
+
+	for (i = 0; i < MAX_S390_IO_ADAPTERS; i++) {
+		struct s390_io_adapter *adapter = kvm->arch.adapters[i];
+
+		if (!adapter)
+			continue;
+		spin_lock(&adapter->maps_lock);
+		list_for_each_entry_safe(map, tmp, &adapter->maps, list) {
+			if (start <= map->guest_addr && map->guest_addr < end) {
+				if (IS_ERR(map->page))
+					map->page = ERR_PTR(-EAGAIN);
+				else
+					map->page = NULL;
+			}
+		}
+		spin_unlock(&adapter->maps_lock);
+	}
+}
+
 static unsigned long get_ind_bit(__u64 addr, unsigned long bit_nr, bool swap)
 {
 	unsigned long bit;
@@ -2699,19 +2719,71 @@ static unsigned long get_ind_bit(__u64 addr, unsigned long bit_nr, bool swap)
 	return swap ? (bit ^ (BITS_PER_LONG - 1)) : bit;
 }
 
-static struct s390_map_info *get_map_info(struct s390_io_adapter *adapter,
-					  u64 addr)
+static struct page *get_map_page(struct kvm *kvm,
+				 struct s390_io_adapter *adapter,
+				 u64 addr)
 {
 	struct s390_map_info *map;
+	unsigned long uaddr;
+	struct page *page;
+	bool need_retry;
+	int ret;
 
 	if (!adapter)
 		return NULL;
+retry:
+	page = NULL;
+	uaddr = 0;
+	spin_lock(&adapter->maps_lock);
+	list_for_each_entry(map, &adapter->maps, list)
+		if (map->guest_addr == addr) {
+			uaddr = map->addr;
+			page = map->page;
+			if (!page)
+				map->page = ERR_PTR(-EBUSY);
+			else if (IS_ERR(page) || !page_cache_get_speculative(page)) {
+				spin_unlock(&adapter->maps_lock);
+				goto retry;
+			}
+			break;
+		}
+	spin_unlock(&adapter->maps_lock);
+
+	if (page)
+		return page;
+	if (!uaddr)
+		return NULL;
 
-	list_for_each_entry(map, &adapter->maps, list) {
-		if (map->guest_addr == addr)
-			return map;
+	down_read(&kvm->mm->mmap_sem);
+	ret = set_pgste_bits(kvm->mm, uaddr, PGSTE_IN_BIT, PGSTE_IN_BIT);
+	if (ret)
+		goto fail;
+	ret = get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
+				    &page, NULL, NULL);
+	if (ret < 1)
+		page = NULL;
+fail:
+	up_read(&kvm->mm->mmap_sem);
+	need_retry = true;
+	spin_lock(&adapter->maps_lock);
+	list_for_each_entry(map, &adapter->maps, list)
+		if (map->guest_addr == addr) {
+			if (map->page == ERR_PTR(-EBUSY)) {
+				map->page = page;
+				need_retry = false;
+			} else if (IS_ERR(map->page)) {
+				map->page = NULL;
+			}
+			break;
+		}
+	spin_unlock(&adapter->maps_lock);
+	if (need_retry) {
+		if (page)
+			put_page(page);
+		goto retry;
 	}
-	return NULL;
+
+	return page;
 }
 
 static int adapter_indicators_set(struct kvm *kvm,
@@ -2720,30 +2792,35 @@ static int adapter_indicators_set(struct kvm *kvm,
 {
 	unsigned long bit;
 	int summary_set, idx;
-	struct s390_map_info *info;
+	struct page *ind_page, *summary_page;
 	void *map;
 
-	info = get_map_info(adapter, adapter_int->ind_addr);
-	if (!info)
+	ind_page = get_map_page(kvm, adapter, adapter_int->ind_addr);
+	if (!ind_page)
 		return -1;
-	map = page_address(info->page);
-	bit = get_ind_bit(info->addr, adapter_int->ind_offset, adapter->swap);
-	set_bit(bit, map);
-	idx = srcu_read_lock(&kvm->srcu);
-	mark_page_dirty(kvm, info->guest_addr >> PAGE_SHIFT);
-	set_page_dirty_lock(info->page);
-	info = get_map_info(adapter, adapter_int->summary_addr);
-	if (!info) {
-		srcu_read_unlock(&kvm->srcu, idx);
+	summary_page = get_map_page(kvm, adapter, adapter_int->summary_addr);
+	if (!summary_page) {
+		put_page(ind_page);
 		return -1;
 	}
-	map = page_address(info->page);
-	bit = get_ind_bit(info->addr, adapter_int->summary_offset,
-			  adapter->swap);
+
+	idx = srcu_read_lock(&kvm->srcu);
+	map = page_address(ind_page);
+	bit = get_ind_bit(adapter_int->ind_addr,
+			  adapter_int->ind_offset, adapter->swap);
+	set_bit(bit, map);
+	mark_page_dirty(kvm, adapter_int->ind_addr >> PAGE_SHIFT);
+	set_page_dirty_lock(ind_page);
+	map = page_address(summary_page);
+	bit = get_ind_bit(adapter_int->summary_addr,
+			  adapter_int->summary_offset, adapter->swap);
 	summary_set = test_and_set_bit(bit, map);
-	mark_page_dirty(kvm, info->guest_addr >> PAGE_SHIFT);
-	set_page_dirty_lock(info->page);
+	mark_page_dirty(kvm, adapter_int->summary_addr >> PAGE_SHIFT);
+	set_page_dirty_lock(summary_page);
 	srcu_read_unlock(&kvm->srcu, idx);
+
+	put_page(ind_page);
+	put_page(summary_page);
 	return summary_set ? 0 : 1;
 }
 
@@ -2765,9 +2842,7 @@ static int set_adapter_int(struct kvm_kernel_irq_routing_entry *e,
 	adapter = get_io_adapter(kvm, e->adapter.adapter_id);
 	if (!adapter)
 		return -1;
-	down_read(&adapter->maps_lock);
 	ret = adapter_indicators_set(kvm, adapter, &e->adapter);
-	up_read(&adapter->maps_lock);
 	if ((ret > 0) && !adapter->masked) {
 		ret = kvm_s390_inject_airq(kvm, adapter);
 		if (ret == 0)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index e39f6ef97b09..1a48214ac507 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -219,6 +219,7 @@ static struct kvm_s390_vm_cpu_subfunc kvm_s390_available_subfunc;
 
 static struct gmap_notifier gmap_notifier;
 static struct gmap_notifier vsie_gmap_notifier;
+static struct gmap_notifier adapter_gmap_notifier;
 debug_info_t *kvm_s390_dbf;
 
 /* Section: not file related */
@@ -299,6 +300,8 @@ int kvm_arch_hardware_setup(void)
 	gmap_register_pte_notifier(&gmap_notifier);
 	vsie_gmap_notifier.notifier_call = kvm_s390_vsie_gmap_notifier;
 	gmap_register_pte_notifier(&vsie_gmap_notifier);
+	adapter_gmap_notifier.notifier_call = kvm_s390_adapter_gmap_notifier;
+	gmap_register_pte_notifier(&adapter_gmap_notifier);
 	atomic_notifier_chain_register(&s390_epoch_delta_notifier,
 				       &kvm_clock_notifier);
 	return 0;
@@ -308,6 +311,7 @@ void kvm_arch_hardware_unsetup(void)
 {
 	gmap_unregister_pte_notifier(&gmap_notifier);
 	gmap_unregister_pte_notifier(&vsie_gmap_notifier);
+	gmap_unregister_pte_notifier(&adapter_gmap_notifier);
 	atomic_notifier_chain_unregister(&s390_epoch_delta_notifier,
 					 &kvm_clock_notifier);
 }
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 6d9448dbd052..54c5eb4b275d 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -367,6 +367,8 @@ int s390int_to_s390irq(struct kvm_s390_interrupt *s390int,
 			struct kvm_s390_irq *s390irq);
 
 /* implemented in interrupt.c */
+void kvm_s390_adapter_gmap_notifier(struct gmap *gmap, unsigned long start,
+				    unsigned long end);
 int kvm_s390_vcpu_has_irq(struct kvm_vcpu *vcpu, int exclude_stop);
 int psw_extint_disabled(struct kvm_vcpu *vcpu);
 void kvm_s390_destroy_adapters(struct kvm *kvm);
-- 
2.24.0




* [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests
  2020-02-07 11:39 [PATCH 00/35] KVM: s390: Add support for protected VMs Christian Borntraeger
  2020-02-07 11:39 ` [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages Christian Borntraeger
  2020-02-07 11:39 ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages Christian Borntraeger
@ 2020-02-07 11:39 ` Christian Borntraeger
  2020-02-12 13:42   ` Cornelia Huck
  2020-02-14 17:59   ` David Hildenbrand
  2020-02-07 11:39 ` [PATCH 06/35] s390/mm: add (non)secure page access exceptions handlers Christian Borntraeger
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-07 11:39 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton

From: Claudio Imbrenda <imbrenda@linux.ibm.com>

This provides the basic ultravisor calls and page table handling to cope
with secure guests:
- provide arch_make_page_accessible
- make pages accessible after unmapping of secure guests
- provide the ultravisor commands convert to/from secure
- provide the ultravisor commands pin/unpin shared
- provide callbacks to make pages secure (inaccessible)
 - we check for the expected pin count to only make pages secure if the
   host is not accessing them (see the worked example below)
 - we fence hugetlbfs for secure pages
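
To make the pin count check concrete, a worked example (our arithmetic
based on expected_page_refs() below, not from the original patch):

/*
 * clean page-cache page, mapped by exactly one process, no buffers:
 *   refcount = 1 (page cache) + 1 (mapping)          = 2
 *   expected = page_mapcount() + 1 (page_mapping())  = 2
 *   -> page_ref_freeze(page, 2) succeeds, the page can be made secure
 *
 * same page while get_user_pages() (e.g. for O_DIRECT) holds an
 * extra reference:
 *   refcount = 3, expected = 2
 *   -> page_ref_freeze() fails, make_secure_pte() returns -EBUSY and
 *      the conversion is retried
 */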

Co-developed-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
[borntraeger@de.ibm.com: patch merging, splitting, fixing]
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/include/asm/gmap.h        |   2 +
 arch/s390/include/asm/mmu.h         |   2 +
 arch/s390/include/asm/mmu_context.h |   1 +
 arch/s390/include/asm/page.h        |   5 +
 arch/s390/include/asm/pgtable.h     |  34 +++++-
 arch/s390/include/asm/uv.h          |  52 +++++++++
 arch/s390/kernel/uv.c               | 172 ++++++++++++++++++++++++++++
 7 files changed, 263 insertions(+), 5 deletions(-)

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index 37f96b6f0e61..e2d2f48c5c7c 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -9,6 +9,7 @@
 #ifndef _ASM_S390_GMAP_H
 #define _ASM_S390_GMAP_H
 
+#include <linux/radix-tree.h>
 #include <linux/refcount.h>
 
 /* Generic bits for GMAP notification on DAT table entry changes. */
@@ -61,6 +62,7 @@ struct gmap {
 	spinlock_t shadow_lock;
 	struct gmap *parent;
 	unsigned long orig_asce;
+	unsigned long guest_handle;
 	int edat_level;
 	bool removed;
 	bool initialized;
diff --git a/arch/s390/include/asm/mmu.h b/arch/s390/include/asm/mmu.h
index bcfb6371086f..e21b618ad432 100644
--- a/arch/s390/include/asm/mmu.h
+++ b/arch/s390/include/asm/mmu.h
@@ -16,6 +16,8 @@ typedef struct {
 	unsigned long asce;
 	unsigned long asce_limit;
 	unsigned long vdso_base;
+	/* The mmu context belongs to a secure guest. */
+	atomic_t is_protected;
 	/*
 	 * The following bitfields need a down_write on the mm
 	 * semaphore when they are written to. As they are only
diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
index 8d04e6f3f796..afa836014076 100644
--- a/arch/s390/include/asm/mmu_context.h
+++ b/arch/s390/include/asm/mmu_context.h
@@ -23,6 +23,7 @@ static inline int init_new_context(struct task_struct *tsk,
 	INIT_LIST_HEAD(&mm->context.gmap_list);
 	cpumask_clear(&mm->context.cpu_attach_mask);
 	atomic_set(&mm->context.flush_count, 0);
+	atomic_set(&mm->context.is_protected, 0);
 	mm->context.gmap_asce = 0;
 	mm->context.flush_mm = 0;
 	mm->context.compat_mm = test_thread_flag(TIF_31BIT);
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index a4d38092530a..05ea3e42a041 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -151,6 +151,11 @@ static inline int devmem_is_allowed(unsigned long pfn)
 #define HAVE_ARCH_FREE_PAGE
 #define HAVE_ARCH_ALLOC_PAGE
 
+#if IS_ENABLED(CONFIG_PGSTE)
+int arch_make_page_accessible(struct page *page);
+#define HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #define __PAGE_OFFSET		0x0UL
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 7b03037a8475..dbd1453e6924 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -19,6 +19,7 @@
 #include <linux/atomic.h>
 #include <asm/bug.h>
 #include <asm/page.h>
+#include <asm/uv.h>
 
 extern pgd_t swapper_pg_dir[];
 extern void paging_init(void);
@@ -520,6 +521,15 @@ static inline int mm_has_pgste(struct mm_struct *mm)
 	return 0;
 }
 
+static inline int mm_is_protected(struct mm_struct *mm)
+{
+#ifdef CONFIG_PGSTE
+	if (unlikely(atomic_read(&mm->context.is_protected)))
+		return 1;
+#endif
+	return 0;
+}
+
 static inline int mm_alloc_pgste(struct mm_struct *mm)
 {
 #ifdef CONFIG_PGSTE
@@ -1059,7 +1069,12 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long addr, pte_t *ptep)
 {
-	return ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
+	pte_t res;
+
+	res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
+	if (mm_is_protected(mm) && pte_present(res))
+		uv_convert_from_secure(pte_val(res) & PAGE_MASK);
+	return res;
 }
 
 #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
@@ -1071,7 +1086,12 @@ void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
 static inline pte_t ptep_clear_flush(struct vm_area_struct *vma,
 				     unsigned long addr, pte_t *ptep)
 {
-	return ptep_xchg_direct(vma->vm_mm, addr, ptep, __pte(_PAGE_INVALID));
+	pte_t res;
+
+	res = ptep_xchg_direct(vma->vm_mm, addr, ptep, __pte(_PAGE_INVALID));
+	if (mm_is_protected(vma->vm_mm) && pte_present(res))
+		uv_convert_from_secure(pte_val(res) & PAGE_MASK);
+	return res;
 }
 
 /*
@@ -1086,12 +1106,16 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 					    unsigned long addr,
 					    pte_t *ptep, int full)
 {
+	pte_t res;
 	if (full) {
-		pte_t pte = *ptep;
+		res = *ptep;
 		*ptep = __pte(_PAGE_INVALID);
-		return pte;
+	} else {
+		res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
 	}
-	return ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
+	if (mm_is_protected(mm) && pte_present(res))
+		uv_convert_from_secure(pte_val(res) & PAGE_MASK);
+	return res;
 }
 
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
index 9e988543201f..1b97230a57ba 100644
--- a/arch/s390/include/asm/uv.h
+++ b/arch/s390/include/asm/uv.h
@@ -15,6 +15,7 @@
 #include <linux/errno.h>
 #include <linux/bug.h>
 #include <asm/page.h>
+#include <asm/gmap.h>
 
 #define UVC_RC_EXECUTED		0x0001
 #define UVC_RC_INV_CMD		0x0002
@@ -24,6 +25,10 @@
 
 #define UVC_CMD_QUI			0x0001
 #define UVC_CMD_INIT_UV			0x000f
+#define UVC_CMD_CONV_TO_SEC_STOR	0x0200
+#define UVC_CMD_CONV_FROM_SEC_STOR	0x0201
+#define UVC_CMD_PIN_PAGE_SHARED		0x0341
+#define UVC_CMD_UNPIN_PAGE_SHARED	0x0342
 #define UVC_CMD_SET_SHARED_ACCESS	0x1000
 #define UVC_CMD_REMOVE_SHARED_ACCESS	0x1001
 
@@ -31,8 +36,12 @@
 enum uv_cmds_inst {
 	BIT_UVC_CMD_QUI = 0,
 	BIT_UVC_CMD_INIT_UV = 1,
+	BIT_UVC_CMD_CONV_TO_SEC_STOR = 6,
+	BIT_UVC_CMD_CONV_FROM_SEC_STOR = 7,
 	BIT_UVC_CMD_SET_SHARED_ACCESS = 8,
 	BIT_UVC_CMD_REMOVE_SHARED_ACCESS = 9,
+	BIT_UVC_CMD_PIN_PAGE_SHARED = 21,
+	BIT_UVC_CMD_UNPIN_PAGE_SHARED = 22,
 };
 
 struct uv_cb_header {
@@ -69,6 +78,19 @@ struct uv_cb_init {
 	u64 reserved28[4];
 } __packed __aligned(8);
 
+struct uv_cb_cts {
+	struct uv_cb_header header;
+	u64 reserved08[2];
+	u64 guest_handle;
+	u64 gaddr;
+} __packed __aligned(8);
+
+struct uv_cb_cfs {
+	struct uv_cb_header header;
+	u64 reserved08[2];
+	u64 paddr;
+} __packed __aligned(8);
+
 struct uv_cb_share {
 	struct uv_cb_header header;
 	u64 reserved08[3];
@@ -169,12 +191,42 @@ static inline int is_prot_virt_host(void)
 	return prot_virt_host;
 }
 
+int uv_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb);
+int uv_convert_from_secure(unsigned long paddr);
+
+static inline int uv_convert_to_secure(struct gmap *gmap, unsigned long gaddr)
+{
+	struct uv_cb_cts uvcb = {
+		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
+		.header.len = sizeof(uvcb),
+		.guest_handle = gmap->guest_handle,
+		.gaddr = gaddr,
+	};
+
+	return uv_make_secure(gmap, gaddr, &uvcb);
+}
+
 void setup_uv(void);
 void adjust_to_uv_max(unsigned long *vmax);
 #else
 #define is_prot_virt_host() 0
 static inline void setup_uv(void) {}
 static inline void adjust_to_uv_max(unsigned long *vmax) {}
+
+static inline int uv_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
+{
+	return 0;
+}
+
+static inline int uv_convert_from_secure(unsigned long paddr)
+{
+	return 0;
+}
+
+static inline int uv_convert_to_secure(struct gmap *gmap, unsigned long gaddr)
+{
+	return 0;
+}
 #endif
 
 #if defined(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) ||                          \
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index a06a628a88da..15ac598a3d8d 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -9,6 +9,8 @@
 #include <linux/sizes.h>
 #include <linux/bitmap.h>
 #include <linux/memblock.h>
+#include <linux/pagemap.h>
+#include <linux/swap.h>
 #include <asm/facility.h>
 #include <asm/sections.h>
 #include <asm/uv.h>
@@ -99,4 +101,174 @@ void adjust_to_uv_max(unsigned long *vmax)
 	if (prot_virt_host && *vmax > uv_info.max_sec_stor_addr)
 		*vmax = uv_info.max_sec_stor_addr;
 }
+
+static int __uv_pin_shared(unsigned long paddr)
+{
+	struct uv_cb_cfs uvcb = {
+		.header.cmd	= UVC_CMD_PIN_PAGE_SHARED,
+		.header.len	= sizeof(uvcb),
+		.paddr		= paddr,
+	};
+
+	if (uv_call(0, (u64)&uvcb))
+		return -EINVAL;
+	return 0;
+}
+
+/*
+ * Requests the Ultravisor to encrypt a guest page and make it
+ * accessible to the host for paging (export).
+ *
+ * @paddr: Absolute host address of page to be exported
+ */
+int uv_convert_from_secure(unsigned long paddr)
+{
+	struct uv_cb_cfs uvcb = {
+		.header.cmd = UVC_CMD_CONV_FROM_SEC_STOR,
+		.header.len = sizeof(uvcb),
+		.paddr = paddr
+	};
+
+	uv_call(0, (u64)&uvcb);
+
+	if (uvcb.header.rc == 1 || uvcb.header.rc == 0x107)
+		return 0;
+	return -EINVAL;
+}
+
+static int expected_page_refs(struct page *page)
+{
+	int res;
+
+	res = page_mapcount(page);
+	if (PageSwapCache(page))
+		res++;
+	else if (page_mapping(page)) {
+		res++;
+		if (page_has_private(page))
+			res++;
+	}
+	return res;
+}
+
+struct conv_params {
+	struct uv_cb_header *uvcb;
+	struct page *page;
+};
+
+static int make_secure_pte(pte_t *ptep, unsigned long addr, void *data)
+{
+	struct conv_params *params = data;
+	pte_t entry = READ_ONCE(*ptep);
+	struct page *page;
+	int expected, rc = 0;
+
+	if (!pte_present(entry))
+		return -ENXIO;
+	if (pte_val(entry) & (_PAGE_INVALID | _PAGE_PROTECT))
+		return -ENXIO;
+
+	page = pte_page(entry);
+	if (page != params->page)
+		return -ENXIO;
+
+	if (PageWriteback(page))
+		return -EAGAIN;
+	expected = expected_page_refs(page);
+	if (!page_ref_freeze(page, expected))
+		return -EBUSY;
+	set_bit(PG_arch_1, &page->flags);
+	rc = uv_call(0, (u64)params->uvcb);
+	page_ref_unfreeze(page, expected);
+	if (rc)
+		rc = (params->uvcb->rc == 0x10a) ? -ENXIO : -EINVAL;
+	return rc;
+}
+
+/*
+ * Requests the Ultravisor to make a page accessible to a guest.
+ * If it's brought in the first time, it will be cleared. If
+ * it has been exported before, it will be decrypted and integrity
+ * checked.
+ *
+ * @gmap: Guest mapping
+ * @gaddr: Guest 2 absolute address to be imported
+ */
+int uv_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
+{
+	struct conv_params params = { .uvcb = uvcb };
+	struct vm_area_struct *vma;
+	unsigned long uaddr;
+	int rc, local_drain = 0;
+
+again:
+	rc = -EFAULT;
+	down_read(&gmap->mm->mmap_sem);
+
+	uaddr = __gmap_translate(gmap, gaddr);
+	if (IS_ERR_VALUE(uaddr))
+		goto out;
+	vma = find_vma(gmap->mm, uaddr);
+	if (!vma)
+		goto out;
+	if (is_vm_hugetlb_page(vma))
+		goto out;
+
+	rc = -ENXIO;
+	params.page = follow_page(vma, uaddr, FOLL_WRITE | FOLL_NOWAIT);
+	if (IS_ERR_OR_NULL(params.page))
+		goto out;
+
+	lock_page(params.page);
+	rc = apply_to_page_range(gmap->mm, uaddr, PAGE_SIZE, make_secure_pte, &params);
+	unlock_page(params.page);
+out:
+	up_read(&gmap->mm->mmap_sem);
+
+	if (rc == -EBUSY) {
+		if (local_drain) {
+			lru_add_drain_all();
+			return -EAGAIN;
+		}
+		lru_add_drain();
+		local_drain = 1;
+		goto again;
+	} else if (rc == -ENXIO) {
+		if (gmap_fault(gmap, gaddr, FAULT_FLAG_WRITE))
+			return -EFAULT;
+		return -EAGAIN;
+	}
+	return rc;
+}
+EXPORT_SYMBOL_GPL(uv_make_secure);
+
+/**
+ * To be called with the page locked or with an extra reference!
+ */
+int arch_make_page_accessible(struct page *page)
+{
+	int rc = 0;
+
+	if (PageHuge(page))
+		return 0;
+
+	if (!test_bit(PG_arch_1, &page->flags))
+		return 0;
+
+	rc = __uv_pin_shared(page_to_phys(page));
+	if (!rc) {
+		clear_bit(PG_arch_1, &page->flags);
+		return 0;
+	}
+
+	rc = uv_convert_from_secure(page_to_phys(page));
+	if (!rc) {
+		clear_bit(PG_arch_1, &page->flags);
+		return 0;
+	}
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(arch_make_page_accessible);
+
 #endif
-- 
2.24.0




* [PATCH 06/35] s390/mm: add (non)secure page access exceptions handlers
  2020-02-07 11:39 [PATCH 00/35] KVM: s390: Add support for protected VMs Christian Borntraeger
                   ` (2 preceding siblings ...)
  2020-02-07 11:39 ` [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests Christian Borntraeger
@ 2020-02-07 11:39 ` Christian Borntraeger
  2020-02-14 18:05   ` David Hildenbrand
  2020-02-07 11:39 ` [PATCH 10/35] KVM: s390: protvirt: Secure memory is not mergeable Christian Borntraeger
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-07 11:39 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton,
	Janosch Frank

From: Vasily Gorbik <gor@linux.ibm.com>

Add exception handlers performing transparent transitions of
non-secure pages to secure (import) upon guest access and of secure
pages to non-secure (export) upon hypervisor access.
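
In other words (our summary of the two new program check codes wired
up below):

/*
 * 0x3d "secure storage access":     the host touched a secure page
 *      -> export it, via arch_make_page_accessible()
 * 0x3e "non-secure storage access": the guest touched a page that is
 *      not (yet) secure -> import it, via uv_make_secure() with
 *      UVC_CMD_CONV_TO_SEC_STOR
 */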

Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
[frankja@linux.ibm.com: adding checks for failures]
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
[imbrenda@linux.ibm.com:  adding a check for gmap fault]
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
[borntraeger@de.ibm.com: patch merging, splitting, fixing]
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kernel/pgm_check.S |  4 +-
 arch/s390/mm/fault.c         | 86 ++++++++++++++++++++++++++++++++++++
 2 files changed, 88 insertions(+), 2 deletions(-)

diff --git a/arch/s390/kernel/pgm_check.S b/arch/s390/kernel/pgm_check.S
index 59dee9d3bebf..27ac4f324c70 100644
--- a/arch/s390/kernel/pgm_check.S
+++ b/arch/s390/kernel/pgm_check.S
@@ -78,8 +78,8 @@ PGM_CHECK(do_dat_exception)		/* 39 */
 PGM_CHECK(do_dat_exception)		/* 3a */
 PGM_CHECK(do_dat_exception)		/* 3b */
 PGM_CHECK_DEFAULT			/* 3c */
-PGM_CHECK_DEFAULT			/* 3d */
-PGM_CHECK_DEFAULT			/* 3e */
+PGM_CHECK(do_secure_storage_access)	/* 3d */
+PGM_CHECK(do_non_secure_storage_access)	/* 3e */
 PGM_CHECK_DEFAULT			/* 3f */
 PGM_CHECK_DEFAULT			/* 40 */
 PGM_CHECK_DEFAULT			/* 41 */
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 7b0bb475c166..fab4219fa0be 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -38,6 +38,7 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/facility.h>
+#include <asm/uv.h>
 #include "../kernel/entry.h"
 
 #define __FAIL_ADDR_MASK -4096L
@@ -816,3 +817,88 @@ static int __init pfault_irq_init(void)
 early_initcall(pfault_irq_init);
 
 #endif /* CONFIG_PFAULT */
+
+#if IS_ENABLED(CONFIG_KVM)
+void do_secure_storage_access(struct pt_regs *regs)
+{
+	unsigned long addr = regs->int_parm_long & __FAIL_ADDR_MASK;
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	struct page *page;
+	int rc;
+
+	switch (get_fault_type(regs)) {
+	case USER_FAULT:
+		mm = current->mm;
+		down_read(&mm->mmap_sem);
+		vma = find_vma(mm, addr);
+		if (!vma) {
+			up_read(&mm->mmap_sem);
+			do_fault_error(regs, VM_READ | VM_WRITE, VM_FAULT_BADMAP);
+			break;
+		}
+		page = follow_page(vma, addr, FOLL_WRITE | FOLL_GET);
+		if (IS_ERR_OR_NULL(page)) {
+			up_read(&mm->mmap_sem);
+			break;
+		}
+		if (arch_make_page_accessible(page))
+			send_sig(SIGSEGV, current, 0);
+		put_page(page);
+		up_read(&mm->mmap_sem);
+		break;
+	case KERNEL_FAULT:
+		page = phys_to_page(addr);
+		if (unlikely(!try_get_page(page)))
+			break;
+		rc = arch_make_page_accessible(page);
+		put_page(page);
+		if (rc)
+			BUG();
+		break;
+	case VDSO_FAULT:
+		/* fallthrough */
+	case GMAP_FAULT:
+		/* fallthrough */
+	default:
+		do_fault_error(regs, VM_READ | VM_WRITE, VM_FAULT_BADMAP);
+		WARN_ON_ONCE(1);
+	}
+}
+NOKPROBE_SYMBOL(do_secure_storage_access);
+
+void do_non_secure_storage_access(struct pt_regs *regs)
+{
+	unsigned long gaddr = regs->int_parm_long & __FAIL_ADDR_MASK;
+	struct gmap *gmap = (struct gmap *)S390_lowcore.gmap;
+	struct uv_cb_cts uvcb = {
+		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
+		.header.len = sizeof(uvcb),
+		.guest_handle = gmap->guest_handle,
+		.gaddr = gaddr,
+	};
+	int rc;
+
+	if (get_fault_type(regs) != GMAP_FAULT) {
+		do_fault_error(regs, VM_READ | VM_WRITE, VM_FAULT_BADMAP);
+		WARN_ON_ONCE(1);
+		return;
+	}
+
+	rc = uv_make_secure(gmap, gaddr, &uvcb);
+	if (rc == -EINVAL && uvcb.header.rc != 0x104)
+		send_sig(SIGSEGV, current, 0);
+}
+NOKPROBE_SYMBOL(do_non_secure_storage_access);
+
+#else
+void do_secure_storage_access(struct pt_regs *regs)
+{
+	default_trap_handler(regs);
+}
+
+void do_non_secure_storage_access(struct pt_regs *regs)
+{
+	default_trap_handler(regs);
+}
+#endif
-- 
2.24.0




* [PATCH 10/35] KVM: s390: protvirt: Secure memory is not mergeable
  2020-02-07 11:39 [PATCH 00/35] KVM: s390: Add support for protected VMs Christian Borntraeger
                   ` (3 preceding siblings ...)
  2020-02-07 11:39 ` [PATCH 06/35] s390/mm: add (non)secure page access exceptions handlers Christian Borntraeger
@ 2020-02-07 11:39 ` Christian Borntraeger
  2020-02-07 11:39 ` [PATCH 11/35] KVM: s390/mm: Make pages accessible before destroying the guest Christian Borntraeger
  2020-02-07 11:39 ` [PATCH 21/35] KVM: s390/mm: handle guest unpin events Christian Borntraeger
  6 siblings, 0 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-07 11:39 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, Janosch Frank, linux-mm,
	Andrew Morton

From: Janosch Frank <frankja@linux.ibm.com>

KSM will not work on secure pages: when the kernel reads a secure
page, it only sees the encrypted content, and hence no two pages will
ever look the same.

Let's mark the guest pages as unmergeable when we transition to secure
mode.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
[borntraeger@de.ibm.com: patch merging, splitting, fixing]
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/include/asm/gmap.h |  1 +
 arch/s390/kvm/kvm-s390.c     |  8 ++++++++
 arch/s390/mm/gmap.c          | 30 ++++++++++++++++++++----------
 3 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index e2d2f48c5c7c..e1f2cc0b2b00 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -146,4 +146,5 @@ int gmap_mprotect_notify(struct gmap *, unsigned long start,
 
 void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long dirty_bitmap[4],
 			     unsigned long gaddr, unsigned long vmaddr);
+int gmap_mark_unmergeable(void);
 #endif /* _ASM_S390_GMAP_H */
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index e1bccbb41fdd..ad86e74e27ec 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2182,6 +2182,14 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
 		if (r)
 			break;
 
+		down_write(&current->mm->mmap_sem);
+		r = gmap_mark_unmergeable();
+		up_write(&current->mm->mmap_sem);
+		if (r) {
+			kvm_s390_pv_dealloc_vm(kvm);
+			break;
+		}
+
 		mutex_lock(&kvm->lock);
 		kvm_s390_vcpu_block_all(kvm);
 		/* FMT 4 SIE needs esca */
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index edcdca97e85e..7291452fe5f0 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2548,6 +2548,22 @@ int s390_enable_sie(void)
 }
 EXPORT_SYMBOL_GPL(s390_enable_sie);
 
+int gmap_mark_unmergeable(void)
+{
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma;
+
+	for (vma = mm->mmap; vma; vma = vma->vm_next) {
+		if (ksm_madvise(vma, vma->vm_start, vma->vm_end,
+				MADV_UNMERGEABLE, &vma->vm_flags)) {
+			return -ENOMEM;
+		}
+	}
+	mm->def_flags &= ~VM_MERGEABLE;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gmap_mark_unmergeable);
+
 /*
  * Enable storage key handling from now on and initialize the storage
  * keys with the default key.
@@ -2593,7 +2609,6 @@ static const struct mm_walk_ops enable_skey_walk_ops = {
 int s390_enable_skey(void)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
 	int rc = 0;
 
 	down_write(&mm->mmap_sem);
@@ -2601,16 +2616,11 @@ int s390_enable_skey(void)
 		goto out_up;
 
 	mm->context.uses_skeys = 1;
-	for (vma = mm->mmap; vma; vma = vma->vm_next) {
-		if (ksm_madvise(vma, vma->vm_start, vma->vm_end,
-				MADV_UNMERGEABLE, &vma->vm_flags)) {
-			mm->context.uses_skeys = 0;
-			rc = -ENOMEM;
-			goto out_up;
-		}
+	rc = gmap_mark_unmergeable();
+	if (rc) {
+		mm->context.uses_skeys = 0;
+		goto out_up;
 	}
-	mm->def_flags &= ~VM_MERGEABLE;
-
 	walk_page_range(mm, 0, TASK_SIZE, &enable_skey_walk_ops, NULL);
 
 out_up:
-- 
2.24.0




* [PATCH 11/35] KVM: s390/mm: Make pages accessible before destroying the guest
  2020-02-07 11:39 [PATCH 00/35] KVM: s390: Add support for protected VMs Christian Borntraeger
                   ` (4 preceding siblings ...)
  2020-02-07 11:39 ` [PATCH 10/35] KVM: s390: protvirt: Secure memory is not mergeable Christian Borntraeger
@ 2020-02-07 11:39 ` Christian Borntraeger
  2020-02-14 18:40   ` David Hildenbrand
  2020-02-07 11:39 ` [PATCH 21/35] KVM: s390/mm: handle guest unpin events Christian Borntraeger
  6 siblings, 1 reply; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-07 11:39 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton

Before we destroy the secure configuration, we had better make all
pages accessible again. This also happens during reboot, where we
reboot into a non-secure guest that can then switch back into secure
mode. As this "new" secure guest will have a new ID, we cannot reuse
the old page state.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
---
 arch/s390/include/asm/pgtable.h |  1 +
 arch/s390/kvm/pv.c              |  2 ++
 arch/s390/mm/gmap.c             | 35 +++++++++++++++++++++++++++++++++
 3 files changed, 38 insertions(+)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index dbd1453e6924..3e2ea997c334 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1669,6 +1669,7 @@ extern int vmem_remove_mapping(unsigned long start, unsigned long size);
 extern int s390_enable_sie(void);
 extern int s390_enable_skey(void);
 extern void s390_reset_cmma(struct mm_struct *mm);
+extern void s390_reset_acc(struct mm_struct *mm);
 
 /* s390 has a private copy of get unmapped area to deal with cache synonyms */
 #define HAVE_ARCH_UNMAPPED_AREA
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index 4795e61f4e16..392795a92bd9 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -66,6 +66,8 @@ int kvm_s390_pv_destroy_vm(struct kvm *kvm)
 	int rc;
 	u32 ret;
 
+	/* make all pages accessible before destroying the guest */
+	s390_reset_acc(kvm->mm);
 	rc = uv_cmd_nodata(kvm_s390_pv_handle(kvm),
 			   UVC_CMD_DESTROY_SEC_CONF, &ret);
 	WRITE_ONCE(kvm->arch.gmap->guest_handle, 0);
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 7291452fe5f0..27926a06df32 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2650,3 +2650,38 @@ void s390_reset_cmma(struct mm_struct *mm)
 	up_write(&mm->mmap_sem);
 }
 EXPORT_SYMBOL_GPL(s390_reset_cmma);
+
+/*
+ * make inaccessible pages accessible again
+ */
+static int __s390_reset_acc(pte_t *ptep, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
+{
+	pte_t pte = READ_ONCE(*ptep);
+
+	if (pte_present(pte))
+		WARN_ON_ONCE(uv_convert_from_secure(pte_val(pte) & PAGE_MASK));
+	return 0;
+}
+
+static const struct mm_walk_ops reset_acc_walk_ops = {
+	.pte_entry		= __s390_reset_acc,
+};
+
+#include <linux/sched/mm.h>
+void s390_reset_acc(struct mm_struct *mm)
+{
+	/*
+	 * we might be called during
+	 * reset:                             we walk the pages and clear
+	 * close of all kvm file descriptors: we walk the pages and clear
+	 * exit of process on fd closure:     vma already gone, do nothing
+	 */
+	if (!mmget_not_zero(mm))
+		return;
+	down_read(&mm->mmap_sem);
+	walk_page_range(mm, 0, TASK_SIZE, &reset_acc_walk_ops, NULL);
+	up_read(&mm->mmap_sem);
+	mmput(mm);
+}
+EXPORT_SYMBOL_GPL(s390_reset_acc);
-- 
2.24.0




* [PATCH 21/35] KVM: s390/mm: handle guest unpin events
  2020-02-07 11:39 [PATCH 00/35] KVM: s390: Add support for protected VMs Christian Borntraeger
                   ` (5 preceding siblings ...)
  2020-02-07 11:39 ` [PATCH 11/35] KVM: s390/mm: Make pages accessible before destroying the guest Christian Borntraeger
@ 2020-02-07 11:39 ` Christian Borntraeger
  2020-02-10 14:58   ` Thomas Huth
  6 siblings, 1 reply; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-07 11:39 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton

From: Claudio Imbrenda <imbrenda@linux.ibm.com>

The current code tries to pin shared pages first; if that fails (e.g.
because the page is not shared), it will export them. For shared pages
this means that we get a new intercept telling us that the guest is
unsharing that page. We will make the page secure at that point in
time and revoke the host access. This is synchronized with other host
events; e.g., the code will wait until host I/O has finished.
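
The flow, as we read the code below (a simplified sketch; the
intercept and UVC numbers are taken from this patch):

/*
 * 1. the guest issues UVC "remove shared access" for a page the host
 *    had pinned shared
 * 2. SIE exits with a notification intercept, ipa == 0xb9a4
 * 3. handle_pv_uvc() turns this into UVC_CMD_UNPIN_PAGE_SHARED and
 *    calls uv_make_secure(), which waits for host references (e.g.
 *    ongoing I/O) to go away before the page becomes secure again
 * 4. a UV return code of 0x104 is tolerated and treated as success
 */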

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
[borntraeger@de.ibm.com: patch merging, splitting, fixing]
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kvm/intercept.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
index 2a966dc52611..e155389a4a66 100644
--- a/arch/s390/kvm/intercept.c
+++ b/arch/s390/kvm/intercept.c
@@ -16,6 +16,7 @@
 #include <asm/asm-offsets.h>
 #include <asm/irq.h>
 #include <asm/sysinfo.h>
+#include <asm/uv.h>
 
 #include "kvm-s390.h"
 #include "gaccess.h"
@@ -484,12 +485,35 @@ static int handle_pv_sclp(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static int handle_pv_uvc(struct kvm_vcpu *vcpu)
+{
+	struct uv_cb_share *guest_uvcb = (void *)vcpu->arch.sie_block->sidad;
+	struct uv_cb_cts uvcb = {
+		.header.cmd	= UVC_CMD_UNPIN_PAGE_SHARED,
+		.header.len	= sizeof(uvcb),
+		.guest_handle	= kvm_s390_pv_handle(vcpu->kvm),
+		.gaddr		= guest_uvcb->paddr,
+	};
+	int rc;
+
+	if (guest_uvcb->header.cmd != UVC_CMD_REMOVE_SHARED_ACCESS) {
+		WARN_ONCE(1, "Unexpected UVC 0x%x!\n", guest_uvcb->header.cmd);
+		return 0;
+	}
+	rc = uv_make_secure(vcpu->arch.gmap, uvcb.gaddr, &uvcb);
+	if (rc == -EINVAL && uvcb.header.rc == 0x104)
+		return 0;
+	return rc;
+}
+
 static int handle_pv_notification(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.sie_block->ipa == 0xb210)
 		return handle_pv_spx(vcpu);
 	if (vcpu->arch.sie_block->ipa == 0xb220)
 		return handle_pv_sclp(vcpu);
+	if (vcpu->arch.sie_block->ipa == 0xb9a4)
+		return handle_pv_uvc(vcpu);
 
 	return handle_instruction(vcpu);
 }
-- 
2.24.0




* Re: [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-07 11:39 ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages Christian Borntraeger
@ 2020-02-10 12:26   ` David Hildenbrand
  2020-02-10 18:38     ` Christian Borntraeger
  2020-02-10 18:56     ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt Ulrich Weigand
  2020-02-10 12:40   ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages David Hildenbrand
  1 sibling, 2 replies; 47+ messages in thread
From: David Hildenbrand @ 2020-02-10 12:26 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton

On 07.02.20 12:39, Christian Borntraeger wrote:
> From: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> 
> The adapter interrupt page containing the indicator bits is currently
> pinned. That means that a guest with many devices can pin a lot of
> memory pages in the host. This also complicates the reference tracking
> which is needed for memory management handling of protected virtual
> machines.
> We can reuse the pte notifiers to "cache" the page without pinning it.
> 
> Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
> [borntraeger@de.ibm.com: patch merging, splitting, fixing]
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---

So, instead of pinning explicitly, look up the page address, cache it,
and glue its lifetime to the gmap table entry. When that entry is
changed, invalidate the cached page. On re-access, look up the page
again and register the gmap notifier for the table entry again.

[...]

>  #define MAX_S390_IO_ADAPTERS ((MAX_ISC + 1) * 8)
> diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
> index c06c89d370a7..4bfb2f8fe57c 100644
> --- a/arch/s390/kvm/interrupt.c
> +++ b/arch/s390/kvm/interrupt.c
> @@ -28,6 +28,7 @@
>  #include <asm/switch_to.h>
>  #include <asm/nmi.h>
>  #include <asm/airq.h>
> +#include <linux/pagemap.h>
>  #include "kvm-s390.h"
>  #include "gaccess.h"
>  #include "trace-s390.h"
> @@ -2328,8 +2329,8 @@ static int register_io_adapter(struct kvm_device *dev,
>  		return -ENOMEM;
>  
>  	INIT_LIST_HEAD(&adapter->maps);
> -	init_rwsem(&adapter->maps_lock);
> -	atomic_set(&adapter->nr_maps, 0);
> +	spin_lock_init(&adapter->maps_lock);
> +	adapter->nr_maps = 0;
>  	adapter->id = adapter_info.id;
>  	adapter->isc = adapter_info.isc;
>  	adapter->maskable = adapter_info.maskable;
> @@ -2375,19 +2376,15 @@ static int kvm_s390_adapter_map(struct kvm *kvm, unsigned int id, __u64 addr)
>  		ret = -EFAULT;
>  		goto out;
>  	}
> -	ret = get_user_pages_fast(map->addr, 1, FOLL_WRITE, &map->page);
> -	if (ret < 0)
> -		goto out;
> -	BUG_ON(ret != 1);
> -	down_write(&adapter->maps_lock);
> -	if (atomic_inc_return(&adapter->nr_maps) < MAX_S390_ADAPTER_MAPS) {
> +	spin_lock(&adapter->maps_lock);
> +	if (adapter->nr_maps < MAX_S390_ADAPTER_MAPS) {
> +		adapter->nr_maps++;
>  		list_add_tail(&map->list, &adapter->maps);

I do wonder if we should check for duplicates. The unmap path will only
remove exactly one entry. But maybe this can never happen or is already
handled on a higher layer.

>  }
> @@ -2430,7 +2426,6 @@ void kvm_s390_destroy_adapters(struct kvm *kvm)
>  		list_for_each_entry_safe(map, tmp,
>  					 &kvm->arch.adapters[i]->maps, list) {
>  			list_del(&map->list);
> -			put_page(map->page);
>  			kfree(map);
>  		}
>  		kfree(kvm->arch.adapters[i]);

Between the gmap being removed in kvm_arch_vcpu_destroy() and
kvm_s390_destroy_adapters(), the entries would no longer properly get
invalidated. AFAIK, removing/freeing the gmap will not trigger any
notifiers.

Not sure if that's an issue (IOW, if we can have some very weird race).
But I guess we would have similar races already :)

> @@ -2690,6 +2685,31 @@ struct kvm_device_ops kvm_flic_ops = {
>  	.destroy = flic_destroy,
>  };
>  
> +void kvm_s390_adapter_gmap_notifier(struct gmap *gmap, unsigned long start,
> +				    unsigned long end)
> +{
> +	struct kvm *kvm = gmap->private;
> +	struct s390_map_info *map, *tmp;
> +	int i;
> +
> +	for (i = 0; i < MAX_S390_IO_ADAPTERS; i++) {
> +		struct s390_io_adapter *adapter = kvm->arch.adapters[i];
> +
> +		if (!adapter)
> +			continue;

I have to ask a dumb question: how is kvm->arch.adapters[] protected?

I don't see any explicit locking e.g., on
flic_set_attr()->register_io_adapter().


[...]

> +static struct page *get_map_page(struct kvm *kvm,
> +				 struct s390_io_adapter *adapter,
> +				 u64 addr)
>  {
>  	struct s390_map_info *map;
> +	unsigned long uaddr;
> +	struct page *page;
> +	bool need_retry;
> +	int ret;
>  
>  	if (!adapter)
>  		return NULL;
> +retry:
> +	page = NULL;
> +	uaddr = 0;
> +	spin_lock(&adapter->maps_lock);
> +	list_for_each_entry(map, &adapter->maps, list)
> +		if (map->guest_addr == addr) {

Could it happen that we don't have a fitting entry in the list?

> +			uaddr = map->addr;
> +			page = map->page;
> +			if (!page)
> +				map->page = ERR_PTR(-EBUSY);
> +			else if (IS_ERR(page) || !page_cache_get_speculative(page)) {
> +				spin_unlock(&adapter->maps_lock);
> +				goto retry;
> +			}
> +			break;
> +		}

Can we please factor out looking up the list entry to a separate
function, to be called under lock? (and e.g., use it below as well)

spin_lock(&adapter->maps_lock);
entry = fancy_new_function();
if (!entry) {
	spin_unlock(&adapter->maps_lock);
	return NULL;
}
uaddr = entry->addr;
page = entry->page;
if (!page)
	...
spin_unlock(&adapter->maps_lock);


> +	spin_unlock(&adapter->maps_lock);
> +
> +	if (page)
> +		return page;
> +	if (!uaddr)
> +		return NULL;
>  
> -	list_for_each_entry(map, &adapter->maps, list) {
> -		if (map->guest_addr == addr)
> -			return map;
> +	down_read(&kvm->mm->mmap_sem);
> +	ret = set_pgste_bits(kvm->mm, uaddr, PGSTE_IN_BIT, PGSTE_IN_BIT);
> +	if (ret)
> +		goto fail;
> +	ret = get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
> +				    &page, NULL, NULL);
> +	if (ret < 1)
> +		page = NULL;
> +fail:
> +	up_read(&kvm->mm->mmap_sem);
> +	need_retry = true;
> +	spin_lock(&adapter->maps_lock);
> +	list_for_each_entry(map, &adapter->maps, list)
> +		if (map->guest_addr == addr) {

Could it happen that our entry is suddenly no longer in the list?

> +			if (map->page == ERR_PTR(-EBUSY)) {
> +				map->page = page;
> +				need_retry = false;
> +			} else if (IS_ERR(map->page)) {

else if (map->page == ERR_PTR(-EINVAL))

or simply "else" (every other value would be a BUG_ON, right?)

/* race with a notifier - don't store the entry and retry */

> +				map->page = NULL;
> +			}



> +			break;
> +		}
> +	spin_unlock(&adapter->maps_lock);
> +	if (need_retry) {
> +		if (page)
> +			put_page(page);
> +		goto retry;
>  	}
> -	return NULL;
> +
> +	return page;

Wow, this function is ... special. Took me way too long to figure out
what is going on here. We certainly need comments in there.

I can see that

- ERR_PTR(-EBUSY) is used when somebody is about to do the
  get_user_pages_remote(). Others have to loop until that is resolved.
- ERR_PTR(-EINVAL) is used when the entry gets invalidated by the
  notifier while somebody is about to set it (while still
  ERR_PTR(-EBUSY)). The one currently processing the entry will
  eventually set it back to NULL.

I think we should make this clearer by only setting ERR_PTR(-EINVAL) in
the notifier if already ERR_PTR(-EBUSY), along with a comment.

Can we document the values for map->page and how they are to be handled
right in the struct?
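
Something like this right in the struct would already help (sketch; the
exact ERR_PTR values have to match whatever the final code uses):

struct s390_map_info {
	struct list_head list;
	u64 guest_addr;
	u64 addr;
	/*
	 * page == NULL:            nothing cached, or entry invalidated
	 * page == ERR_PTR(-EBUSY): somebody is doing the lookup +
	 *                          get_user_pages_remote(); others retry
	 * page == other ERR_PTR(): invalidated by the notifier while the
	 *                          lookup was in flight; the lookup resets
	 *                          it to NULL and retries
	 * anything else:           valid cached page pointer
	 */
	struct page *page;
};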

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-07 11:39 ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages Christian Borntraeger
  2020-02-10 12:26   ` David Hildenbrand
@ 2020-02-10 12:40   ` David Hildenbrand
  1 sibling, 0 replies; 47+ messages in thread
From: David Hildenbrand @ 2020-02-10 12:40 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton

[...]

> +void kvm_s390_adapter_gmap_notifier(struct gmap *gmap, unsigned long start,
> +				    unsigned long end)
> +{
> +	struct kvm *kvm = gmap->private;
> +	struct s390_map_info *map, *tmp;
> +	int i;
> +
> +	for (i = 0; i < MAX_S390_IO_ADAPTERS; i++) {
> +		struct s390_io_adapter *adapter = kvm->arch.adapters[i];
> +
> +		if (!adapter)
> +			continue;
> +		spin_lock(&adapter->maps_lock);
> +		list_for_each_entry_safe(map, tmp, &adapter->maps, list) {

list_for_each_entry() is sufficient, we are not removing entries.

> +			if (start <= map->guest_addr && map->guest_addr < end) {
> +				if (IS_ERR(map->page))
> +					map->page = ERR_PTR(-EAGAIN);
> +				else
> +					map->page = NULL;
> +			}

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 21/35] KVM: s390/mm: handle guest unpin events
  2020-02-07 11:39 ` [PATCH 21/35] KVM: s390/mm: handle guest unpin events Christian Borntraeger
@ 2020-02-10 14:58   ` Thomas Huth
  2020-02-11 13:21     ` Cornelia Huck
  0 siblings, 1 reply; 47+ messages in thread
From: Thomas Huth @ 2020-02-10 14:58 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton

On 07/02/2020 12.39, Christian Borntraeger wrote:
> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
> 
> The current code tries to first pin shared pages; if that fails (e.g.
> because the page is not shared) it will export them. For shared pages
> this means that we get a new intercept telling us that the guest is
> unsharing that page. We will make the page secure at that point in time
> and revoke the host access. This is synchronized with other host events,
> e.g. the code will wait until host I/O has finished.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> [borntraeger@de.ibm.com: patch merging, splitting, fixing]
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  arch/s390/kvm/intercept.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
> index 2a966dc52611..e155389a4a66 100644
> --- a/arch/s390/kvm/intercept.c
> +++ b/arch/s390/kvm/intercept.c
> @@ -16,6 +16,7 @@
>  #include <asm/asm-offsets.h>
>  #include <asm/irq.h>
>  #include <asm/sysinfo.h>
> +#include <asm/uv.h>
>  
>  #include "kvm-s390.h"
>  #include "gaccess.h"
> @@ -484,12 +485,35 @@ static int handle_pv_sclp(struct kvm_vcpu *vcpu)
>  	return 0;
>  }
>  
> +static int handle_pv_uvc(struct kvm_vcpu *vcpu)
> +{
> +	struct uv_cb_share *guest_uvcb = (void *)vcpu->arch.sie_block->sidad;
> +	struct uv_cb_cts uvcb = {
> +		.header.cmd	= UVC_CMD_UNPIN_PAGE_SHARED,
> +		.header.len	= sizeof(uvcb),
> +		.guest_handle	= kvm_s390_pv_handle(vcpu->kvm),
> +		.gaddr		= guest_uvcb->paddr,
> +	};
> +	int rc;
> +
> +	if (guest_uvcb->header.cmd != UVC_CMD_REMOVE_SHARED_ACCESS) {
> +		WARN_ONCE(1, "Unexpected UVC 0x%x!\n", guest_uvcb->header.cmd);

Is there a way to signal the failed command to the guest, too?

 Thomas


> +		return 0;
> +	}
> +	rc = uv_make_secure(vcpu->arch.gmap, uvcb.gaddr, &uvcb);
> +	if (rc == -EINVAL && uvcb.header.rc == 0x104)
> +		return 0;
> +	return rc;
> +}
> +
>  static int handle_pv_notification(struct kvm_vcpu *vcpu)
>  {
>  	if (vcpu->arch.sie_block->ipa == 0xb210)
>  		return handle_pv_spx(vcpu);
>  	if (vcpu->arch.sie_block->ipa == 0xb220)
>  		return handle_pv_sclp(vcpu);
> +	if (vcpu->arch.sie_block->ipa == 0xb9a4)
> +		return handle_pv_uvc(vcpu);
>  
>  	return handle_instruction(vcpu);
>  }
> 



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-07 11:39 ` [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages Christian Borntraeger
@ 2020-02-10 17:27   ` Christian Borntraeger
  2020-02-11 11:26     ` Will Deacon
  2020-02-13 19:56     ` Sean Christopherson
  2020-02-10 18:17   ` David Hildenbrand
  2020-02-18  3:36   ` Tian, Kevin
  2 siblings, 2 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-10 17:27 UTC (permalink / raw)
  To: Janosch Frank, Andrew Morton, Marc Zyngier, Sean Christopherson,
	Tom Lendacky
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, kvm-ppc, Paolo Bonzini

CC Marc Zyngier for KVM on ARM.  Marc, see below. Will there be any
use for this on KVM/ARM in the future?

CC Sean Christopherson/Tom Lendacky. Any obvious use case for Intel/AMD
to have a callback before a page is used for I/O?

Andrew (or other mm people) any chance to get an ACK for this change?
I could then carry that via s390 or KVM tree. Or if you want to carry
that yourself I can send an updated version (we need to kind of
synchronize so that Linus pulls the KVM changes after the mm changes).

Andrea asked if others would benefit from this, so here is some more
information about this (and I can also put this into the patch
description).  So we have talked to the POWER folks. They do not use
the standard normal memory management; instead they have a hard split
between secure and normal memory. The secure memory is handled by
the hypervisor as device memory, and the ultravisor and the hypervisor
move it back and forth when needed.

On s390 there is no *separate* pool of physical pages that are secure.
Instead, *any* physical page can be marked as secure or not, by
setting a bit in a per-page data structure that hardware uses to stop
unauthorized access.  (That bit is under control of the ultravisor.)

Note that one side effect of this strategy is that the decision
*which* secure pages to encrypt and then swap out is actually done by
the hypervisor, not the ultravisor.  In our case, the hypervisor is
Linux/KVM, so we're using the regular Linux memory management scheme
(active/inactive LRU lists etc.) to make this decision.  The advantage
is that the Ultravisor code does not need to itself implement any
memory management code, making it a lot simpler.

However, in the end this is why we need the hook into Linux memory
management: once Linux has decided to swap a page out, we need to get
a chance to tell the Ultravisor to "export" the page (i.e., encrypt
its contents and mark it no longer secure).
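
For illustration, that hook then boils down to something like this on
s390 (simplified sketch using the uv_pin_shared()/uv_convert_from_secure()
helpers from this series; not the literal code):

int arch_make_page_accessible(struct page *page)
{
	int rc;

	/* cheap case: pin the page as shared, it can stay secure */
	rc = uv_pin_shared(page_to_phys(page));
	if (!rc)
		return 0;

	/* otherwise export it: encrypt the content, clear the secure state */
	return uv_convert_from_secure(page_to_phys(page));
}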

As outlined below this should be a no-op for anybody not opting in.

Christian                                   

On 07.02.20 12:39, Christian Borntraeger wrote:
> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
> 
> With the introduction of protected KVM guests on s390 there is now a
> concept of inaccessible pages. These pages need to be made accessible
> before the host can access them.
> 
> While cpu accesses will trigger a fault that can be resolved, I/O
> accesses will just fail.  We need to add a callback into architecture
> code for places that will do I/O, namely when writeback is started or
> when a page reference is taken.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  include/linux/gfp.h | 6 ++++++
>  mm/gup.c            | 2 ++
>  mm/page-writeback.c | 1 +
>  3 files changed, 9 insertions(+)
> 
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index e5b817cb86e7..be2754841369 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -485,6 +485,12 @@ static inline void arch_free_page(struct page *page, int order) { }
>  #ifndef HAVE_ARCH_ALLOC_PAGE
>  static inline void arch_alloc_page(struct page *page, int order) { }
>  #endif
> +#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
> +static inline int arch_make_page_accessible(struct page *page)
> +{
> +	return 0;
> +}
> +#endif
>  
>  struct page *
>  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
> diff --git a/mm/gup.c b/mm/gup.c
> index 7646bf993b25..a01262cd2821 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -257,6 +257,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>  			page = ERR_PTR(-ENOMEM);
>  			goto out;
>  		}
> +		arch_make_page_accessible(page);
>  	}
>  	if (flags & FOLL_TOUCH) {
>  		if ((flags & FOLL_WRITE) &&
> @@ -1870,6 +1871,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  
>  		VM_BUG_ON_PAGE(compound_head(page) != head, page);
>  
> +		arch_make_page_accessible(page);
>  		SetPageReferenced(page);
>  		pages[*nr] = page;
>  		(*nr)++;
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 2caf780a42e7..0f0bd14571b1 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2806,6 +2806,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
>  		inc_lruvec_page_state(page, NR_WRITEBACK);
>  		inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
>  	}
> +	arch_make_page_accessible(page);
>  	unlock_page_memcg(page);

As outlined by Ulrich, we can move the callback after the unlock.

>  	return ret;
>  
> 



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-07 11:39 ` [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages Christian Borntraeger
  2020-02-10 17:27   ` Christian Borntraeger
@ 2020-02-10 18:17   ` David Hildenbrand
  2020-02-10 18:28     ` Christian Borntraeger
  2020-02-18  3:36   ` Tian, Kevin
  2 siblings, 1 reply; 47+ messages in thread
From: David Hildenbrand @ 2020-02-10 18:17 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton, Michal Hocko

On 07.02.20 12:39, Christian Borntraeger wrote:
> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
> 
> With the introduction of protected KVM guests on s390 there is now a
> concept of inaccessible pages. These pages need to be made accessible
> before the host can access them.
> 
> While cpu accesses will trigger a fault that can be resolved, I/O
> accesses will just fail.  We need to add a callback into architecture
> code for places that will do I/O, namely when writeback is started or
> when a page reference is taken.

My question would be: What guarantees that the page will stay accessible
(for I/O)? IIRC, pages can be converted back to secure/inaccessible
whenever the guest wants to access them. How will that be dealt with?

I would assume some magic counter that tracks if the page still has to
remain accessible. Once all clients that require the page to be
"accessible" on the I/O path are done, the page can be made inaccessible
again. But then, I would assume there would be something like a

/* make page accessible and make sure the page will remain accessible */
arch_get_page_accessible(page);

/* we're done dealing with the page content */
arch_put_page_accessible(page);
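
Spelled out, I'd imagine something like this (purely illustrative; none
of these helpers exist in the series yet):

int arch_get_page_accessible(struct page *page)
{
	int rc;

	get_page(page);	/* the extra reference blocks re-conversion */
	rc = arch_make_page_accessible(page);
	if (rc)
		put_page(page);
	return rc;
}

void arch_put_page_accessible(struct page *page)
{
	/* last user done: the page may become secure/inaccessible again */
	put_page(page);
}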

You mention page references. I think you should elaborate in the patch
description, in more detail, on how that is expected to work.


(side note: I assume you guys have a plan for dealing with kdump wanting
to dump inaccessible pages. The kexec kernel would have to talk to the
UV to convert pages - and also make pages accessible on the I/O path I
guess - or one would want to mark and skip encrypted pages completely in
kdump somehow, as the content is essentially garbage)



cc Michal (not sure if your area of expertise)
https://lore.kernel.org/kvm/20200207113958.7320-2-borntraeger@de.ibm.com/

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-10 18:17   ` David Hildenbrand
@ 2020-02-10 18:28     ` Christian Borntraeger
  2020-02-10 18:43       ` David Hildenbrand
  0 siblings, 1 reply; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-10 18:28 UTC (permalink / raw)
  To: David Hildenbrand, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton, Michal Hocko



On 10.02.20 19:17, David Hildenbrand wrote:
> On 07.02.20 12:39, Christian Borntraeger wrote:
>> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
>>
>> With the introduction of protected KVM guests on s390 there is now a
>> concept of inaccessible pages. These pages need to be made accessible
>> before the host can access them.
>>
>> While cpu accesses will trigger a fault that can be resolved, I/O
>> accesses will just fail.  We need to add a callback into architecture
>> code for places that will do I/O, namely when writeback is started or
>> when a page reference is taken.
> 
> My question would be: What guarantees that the page will stay accessible
> (for I/O)? IIRC, pages can be converted back to secure/inaccessible
> whenever the guest wants to access them. How will that be dealt with?

Yes, in patch 5 we do use the page lock, PageWriteback and page_ref_freeze
to only make the page secure again if no I/O is going to be started or
still running.

We have minimized the common code impact (just these 3 callbacks) so that 
architecture code can do the right thing.

> 
> I would assume some magic counter that tracks if the page still has to
> remain accessible. Once all clients that require the page to be
> "accessible" on the I/O path are done, the page can be made inaccessible
> again. But then, I would assume there would be something like a
> 
> /* make page accessible and make sure the page will remain accessible */
> arch_get_page_accessible(page);
> 
> /* we're done dealing with the page content */
> arch_put_page_accessible(page);
> 
> You mention page references. I think you should elaborate in the patch
> description, in more detail, on how that is expected to work.
> 
> 
> (side note: I assume you guys have a plan for dealing with kdump wanting
> to dump inaccessible pages. The kexec kernel would have to talk to the
> UV to convert pages - and also make pages accessible on the I/O path I
> guess - or one would want to mark and skip encrypted pages completely in
> kdump somehow, as the content is essentially garbage)

On kexec and kdump the ultravisor is called as part of the diagnose
308 subcodes 0 and 1 to make sure that (a) kdump works (no fault on a
previously secure page) and (b) the content of the secure page is no
longer accessible.




^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-10 12:26   ` David Hildenbrand
@ 2020-02-10 18:38     ` Christian Borntraeger
  2020-02-10 19:33       ` David Hildenbrand
  2020-02-10 18:56     ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt Ulrich Weigand
  1 sibling, 1 reply; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-10 18:38 UTC (permalink / raw)
  To: David Hildenbrand, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton



On 10.02.20 13:26, David Hildenbrand wrote:
> On 07.02.20 12:39, Christian Borntraeger wrote:
>> From: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
>>
>> The adapter interrupt page containing the indicator bits is currently
>> pinned. That means that a guest with many devices can pin a lot of
>> memory pages in the host. This also complicates the reference tracking
>> which is needed for memory management handling of protected virtual
>> machines.
>> We can reuse the pte notifiers to "cache" the page without pinning it.
>>
>> Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
>> Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
>> [borntraeger@de.ibm.com: patch merging, splitting, fixing]
>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>> ---
> 
> So, instead of pinning explicitly, look up the page address, cache it,
> and glue its lifetime to the gmap table entry. When that entry is
> changed, invalidate the cached page. On re-access, look up the page
> again and register the gmap notifier for the table entry again.

I think I might want to split this into two parts.
part 1: a naive approach that always does get_user_pages_remote/put_page
part 2: do the complex caching

Ulrich mentioned that this actually could make the map/unmap a no-op as we
have the address and bit already in the irq route. In the end this might be
as fast as today's pinning, as we replace a list walk with a page table walk.
Plus it would simplify the code. Will have a look if that is the case.

> 
> [...]
> 
>>  #define MAX_S390_IO_ADAPTERS ((MAX_ISC + 1) * 8)
>> diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
>> index c06c89d370a7..4bfb2f8fe57c 100644
>> --- a/arch/s390/kvm/interrupt.c
>> +++ b/arch/s390/kvm/interrupt.c
>> @@ -28,6 +28,7 @@
>>  #include <asm/switch_to.h>
>>  #include <asm/nmi.h>
>>  #include <asm/airq.h>
>> +#include <linux/pagemap.h>
>>  #include "kvm-s390.h"
>>  #include "gaccess.h"
>>  #include "trace-s390.h"
>> @@ -2328,8 +2329,8 @@ static int register_io_adapter(struct kvm_device *dev,
>>  		return -ENOMEM;
>>  
>>  	INIT_LIST_HEAD(&adapter->maps);
>> -	init_rwsem(&adapter->maps_lock);
>> -	atomic_set(&adapter->nr_maps, 0);
>> +	spin_lock_init(&adapter->maps_lock);
>> +	adapter->nr_maps = 0;
>>  	adapter->id = adapter_info.id;
>>  	adapter->isc = adapter_info.isc;
>>  	adapter->maskable = adapter_info.maskable;
>> @@ -2375,19 +2376,15 @@ static int kvm_s390_adapter_map(struct kvm *kvm, unsigned int id, __u64 addr)
>>  		ret = -EFAULT;
>>  		goto out;
>>  	}
>> -	ret = get_user_pages_fast(map->addr, 1, FOLL_WRITE, &map->page);
>> -	if (ret < 0)
>> -		goto out;
>> -	BUG_ON(ret != 1);
>> -	down_write(&adapter->maps_lock);
>> -	if (atomic_inc_return(&adapter->nr_maps) < MAX_S390_ADAPTER_MAPS) {
>> +	spin_lock(&adapter->maps_lock);
>> +	if (adapter->nr_maps < MAX_S390_ADAPTER_MAPS) {
>> +		adapter->nr_maps++;
>>  		list_add_tail(&map->list, &adapter->maps);
> 
> I do wonder if we should check for duplicates. The unmap path will only
> remove exactly one entry. But maybe this can never happen or is already
> handled on a higher layer.


This would be a broken userspace, but I also do not see what would break
in the host if this happens.


> 
>>  }
>> @@ -2430,7 +2426,6 @@ void kvm_s390_destroy_adapters(struct kvm *kvm)
>>  		list_for_each_entry_safe(map, tmp,
>>  					 &kvm->arch.adapters[i]->maps, list) {
>>  			list_del(&map->list);
>> -			put_page(map->page);
>>  			kfree(map);
>>  		}
>>  		kfree(kvm->arch.adapters[i]);
> 
> Between the gmap being removed in kvm_arch_vcpu_destroy() and
> kvm_s390_destroy_adapters(), the entries would no longer properly get
> invalidated. AFAIK, removing/freeing the gmap will not trigger any
> notifiers.
> 
> Not sure if that's an issue (IOW, if we can have some very weird race).
> But I guess we would have similar races already :)

This is only called when all file descriptors are closed and this also closes
all irq routes. So I guess no I/O should be going on any more. 

> 
>> @@ -2690,6 +2685,31 @@ struct kvm_device_ops kvm_flic_ops = {
>>  	.destroy = flic_destroy,
>>  };
>>  
>> +void kvm_s390_adapter_gmap_notifier(struct gmap *gmap, unsigned long start,
>> +				    unsigned long end)
>> +{
>> +	struct kvm *kvm = gmap->private;
>> +	struct s390_map_info *map, *tmp;
>> +	int i;
>> +
>> +	for (i = 0; i < MAX_S390_IO_ADAPTERS; i++) {
>> +		struct s390_io_adapter *adapter = kvm->arch.adapters[i];
>> +
>> +		if (!adapter)
>> +			continue;
> 
> I have to ask a very dumb question: How is kvm->arch.adapters[] protected?

We only add new ones, and they are removed at guest teardown, it seems.
[...]

Let me have a look if we can simplify this.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-10 18:28     ` Christian Borntraeger
@ 2020-02-10 18:43       ` David Hildenbrand
  2020-02-10 18:51         ` Christian Borntraeger
  0 siblings, 1 reply; 47+ messages in thread
From: David Hildenbrand @ 2020-02-10 18:43 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton, Michal Hocko

On 10.02.20 19:28, Christian Borntraeger wrote:
> 
> 
> On 10.02.20 19:17, David Hildenbrand wrote:
>> On 07.02.20 12:39, Christian Borntraeger wrote:
>>> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
>>>
>>> With the introduction of protected KVM guests on s390 there is now a
>>> concept of inaccessible pages. These pages need to be made accessible
>>> before the host can access them.
>>>
>>> While cpu accesses will trigger a fault that can be resolved, I/O
>>> accesses will just fail.  We need to add a callback into architecture
>>> code for places that will do I/O, namely when writeback is started or
>>> when a page reference is taken.
>>
>> My question would be: What guarantees that the page will stay accessible
>> (for I/O)? IIRC, pages can be converted back to secure/inaccessible
>> whenever the guest wants to access them. How will that be dealt with?
> 
> Yes, in patch 5 we do use the page lock, PageWriteback and page_ref_freeze
> to only make the page secure again if no I/O is going to be started or
> still running.
> 
> We have minimized the common code impact (just these 3 callbacks) so that 
> architecture code can do the right thing.

So the magic is

+static int expected_page_refs(struct page *page)
+{
+	int res;
+
+	res = page_mapcount(page);
+	if (PageSwapCache(page))
+		res++;
+	else if (page_mapping(page)) {
+		res++;
+		if (page_has_private(page))
+			res++;
+	}
+	return res;
+}
[...]
+static int make_secure_pte(pte_t *ptep, unsigned long addr, void *data)
[...]
+	if (PageWriteback(page))
+		return -EAGAIN;
+	expected = expected_page_refs(page);
+	if (!page_ref_freeze(page, expected))
+		return -EBUSY;
[...]
+	rc = uv_call(0, (u64)params->uvcb);
+	page_ref_unfreeze(page, expected);

As long as a page does not have the expected refcount, it cannot be
converted to secure and thus cannot be used by the guest.

I assume this implies that if a guest page is pinned somewhere (e.g.,
in KVM), it won't be usable by the guest.

Please add all these details to the patch description. I think they are
crucial to understand how this is expected to work and to be used.


-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-10 18:43       ` David Hildenbrand
@ 2020-02-10 18:51         ` Christian Borntraeger
  0 siblings, 0 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-10 18:51 UTC (permalink / raw)
  To: David Hildenbrand, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton, Michal Hocko



On 10.02.20 19:43, David Hildenbrand wrote:
> On 10.02.20 19:28, Christian Borntraeger wrote:
>>
>>
>> On 10.02.20 19:17, David Hildenbrand wrote:
>>> On 07.02.20 12:39, Christian Borntraeger wrote:
>>>> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
>>>>
>>>> With the introduction of protected KVM guests on s390 there is now a
>>>> concept of inaccessible pages. These pages need to be made accessible
>>>> before the host can access them.
>>>>
>>>> While cpu accesses will trigger a fault that can be resolved, I/O
>>>> accesses will just fail.  We need to add a callback into architecture
>>>> code for places that will do I/O, namely when writeback is started or
>>>> when a page reference is taken.
>>>
>>> My question would be: What guarantees that the page will stay accessible
>>> (for I/O)? IIRC, pages can be converted back to secure/inaccessible
>>> whenever the guest wants to access them. How will that be dealt with?
>>
>> Yes, in patch 5 we do use the page lock, PageWriteback and page_ref_freeze
>> to only make the page secure again if no I/O is going to be started or
>> still running.
>>
>> We have minimized the common code impact (just these 3 callbacks) so that 
>> architecture code can do the right thing.
> 
> So the magic is
> 
> +static int expected_page_refs(struct page *page)
> +{
> +	int res;
> +
> +	res = page_mapcount(page);
> +	if (PageSwapCache(page))
> +		res++;
> +	else if (page_mapping(page)) {
> +		res++;
> +		if (page_has_private(page))
> +			res++;
> +	}
> +	return res;
> +}
> [...]
> +static int make_secure_pte(pte_t *ptep, unsigned long addr, void *data)
> [...]
> +	if (PageWriteback(page))
> +		return -EAGAIN;
> +	expected = expected_page_refs(page);
> +	if (!page_ref_freeze(page, expected))
> +		return -EBUSY;
> [...]
> +	rc = uv_call(0, (u64)params->uvcb);
> +	page_ref_unfreeze(page, expected);
> 
> As long as a page does not have the expected refcount, it cannot be
> converted to secure and thus cannot be used by the guest.
> 
> I assume this implies that if a guest page is pinned somewhere (e.g.,
> in KVM), it won't be usable by the guest.

Yes, but you can always exit QEMU; nothing will "block". You have a permanent
SIE exit. This is something that should not happen for QEMU/KVM, and the
expected refcount logic can be found in many common code places.
> 
> Please add all these details to the patch description. I think they are
> crucial to understand how this is expected to work and to be used.

Makes sense. Will add more explanation to patch 5.
Ulrich also had some idea how to simplify patch 5 in some places.




^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt
  2020-02-10 12:26   ` David Hildenbrand
  2020-02-10 18:38     ` Christian Borntraeger
@ 2020-02-10 18:56     ` Ulrich Weigand
  1 sibling, 0 replies; 47+ messages in thread
From: Ulrich Weigand @ 2020-02-10 18:56 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Christian Borntraeger, Janosch Frank, KVM, Cornelia Huck,
	Thomas Huth, Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli,
	linux-s390, Michael Mueller, Vasily Gorbik, linux-mm,
	Andrew Morton

David Hildenbrand wrote:

> So, instead of pinning explicitly, look up the page address, cache it,
> and glue its lifetime to the gmap table entry. When that entry is
> changed, invalidate the cached page. On re-access, look up the page
> again and register the gmap notifier for the table entry again.

Yes, exactly.

> [...]
>
> > +static struct page *get_map_page(struct kvm *kvm,
> > +				 struct s390_io_adapter *adapter,
> > +				 u64 addr)
> >  {
> >  	struct s390_map_info *map;
> > +	unsigned long uaddr;
> > +	struct page *page;
> > +	bool need_retry;
> > +	int ret;
> > 
> >  	if (!adapter)
> >  		return NULL;
> > +retry:
> > +	page = NULL;
> > +	uaddr = 0;
> > +	spin_lock(&adapter->maps_lock);
> > +	list_for_each_entry(map, &adapter->maps, list)
> > +		if (map->guest_addr == addr) {
> 
> Could it happen that we don't have a fitting entry in the list?

Yes, if user space tries to signal an interrupt on a page that
was not properly announced via KVM_S390_IO_ADAPTER_MAP.

In that case, the loop returns with page == NULL and uaddr == 0,
which will cause the code below to return NULL, which will cause
the caller to return an error to user space.

> > +			uaddr = map->addr;
> > +			page = map->page;
> > +			if (!page)
> > +				map->page = ERR_PTR(-EBUSY);
> > +			else if (IS_ERR(page) || !page_cache_get_speculative(page)) {
> > +				spin_unlock(&adapter->maps_lock);
> > +				goto retry;
> > +			}
> > +			break;
> > +		}
> 
> Can we please factor out looking up the list entry to a separate
> function, to be called under lock? (and e.g., use it below as well)

Good idea, I like that.  Will update the patch ...

> > +	need_retry = true;
> > +	spin_lock(&adapter->maps_lock);
> > +	list_for_each_entry(map, &adapter->maps, list)
> > +		if (map->guest_addr == addr) {
> 
> Could it happen that our entry is suddenly no longer in the list?

Yes, if user space did a KVM_S390_IO_ADAPTER_UNMAP in the meantime.
In this case we'll exit the loop with need_retry == true and will
restart from the beginning, usually then returning an error back
to user space.

> > +			if (map->page == ERR_PTR(-EBUSY)) {
> > +				map->page = page;
> > +				need_retry = false;
> > +			} else if (IS_ERR(map->page)) {
> 
> else if (map->page == ERR_PTR(-EINVAL))
> 
> or simply "else" (every other value would be a BUG_ON, right?)

Usually yes.  I guess there's the theoretical case that we race
with user space removing the old entry with KVM_S390_IO_ADAPTER_UNMAP
and immediately afterwards installing a new entry with the same
guest address.  In that case, we'll also fall into the need_retry
case here.

> Wow, this function is ... special. Took me way too long to figure out
> what is going on here. We certainly need comments in there.

I agree.  As Christian said, it's not fully clear that all of this
is really needed.  Maybe just doing the get_user_pages_remote every
time is actually enough -- we should do the "cache" magic only if
this is really critical for performance.

> I can see that
> 
> - ERR_PTR(-EBUSY) is used when somebody is about to do the
>   get_user_pages_remote(). Others have to loop until that is resolved.
> - ERR_PTR(-EINVAL) is used when the entry gets invalidated by the
>   notifier while somebody is about to set it (while still
>   ERR_PTR(-EBUSY)). The one currently processing the entry will
>   eventually set it back to NULL.

Yes, that's the intent.
 
> I think we should make this clearer by only setting ERR_PTR(-EINVAL) in
> the notifier if already ERR_PTR(-EBUSY), along with a comment.

I guess I wanted to catch the case where we get another invalidation
while we already have -EINVAL.  But given the rest of the logic, this
shouldn't actually ever happen.  (If it *did* happen, however, then
setting to -EINVAL again is safer than resetting to NULL.)
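
Spelling your suggestion out, together with that caveat, the notifier
could do (sketch only):

	if (map->page == ERR_PTR(-EBUSY))
		/* a lookup is in flight: make it discard its result */
		map->page = ERR_PTR(-EINVAL);
	else if (!IS_ERR(map->page))
		map->page = NULL;	/* drop the cached page */
	/* an entry that is already ERR_PTR(-EINVAL) stays poisoned */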

> Can we document the values for map->page and how they are to be handled
> right in the struct?

OK, will do.

Bye,
Ulrich


-- 
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  Ulrich.Weigand@de.ibm.com



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-10 18:38     ` Christian Borntraeger
@ 2020-02-10 19:33       ` David Hildenbrand
  2020-02-11  9:23         ` [PATCH v2 RFC] " Christian Borntraeger
  0 siblings, 1 reply; 47+ messages in thread
From: David Hildenbrand @ 2020-02-10 19:33 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: David Hildenbrand, Janosch Frank, KVM, Cornelia Huck,
	Thomas Huth, Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli,
	linux-s390, Michael Mueller, Vasily Gorbik, linux-mm,
	Andrew Morton



> On 10.02.2020 at 19:41, Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> 
> 
>> On 10.02.20 13:26, David Hildenbrand wrote:
>>> On 07.02.20 12:39, Christian Borntraeger wrote:
>>> From: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
>>> 
>>> The adapter interrupt page containing the indicator bits is currently
>>> pinned. That means that a guest with many devices can pin a lot of
>>> memory pages in the host. This also complicates the reference tracking
>>> which is needed for memory management handling of protected virtual
>>> machines.
>>> We can reuse the pte notifiers to "cache" the page without pinning it.
>>> 
>>> Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
>>> Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
>>> [borntraeger@de.ibm.com: patch merging, splitting, fixing]
>>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>>> ---
>> 
>> So, instead of pinning explicitly, look up the page address, cache it,
>> and glue its lifetime to the gmap table entry. When that entry is
>> changed, invalidate the cached page. On re-access, look up the page
>> again and register the gmap notifier for the table entry again.
> 
> I think I might want to split this into two parts.
> part 1: a naive approach that always does get_user_pages_remote/put_page
> part 2: do the complex caching
> 
> Ulrich mentioned that this actually could make the map/unmap a no-op as we
> have the address and bit already in the irq route. In the end this might be
> as fast as today's pinning, as we replace a list walk with a page table walk.
> Plus it would simplify the code. Will have a look if that is the case.

If we could simplify that heavily, that would be awesome!



^ permalink raw reply	[flat|nested] 47+ messages in thread

* [PATCH v2 RFC] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-10 19:33       ` David Hildenbrand
@ 2020-02-11  9:23         ` Christian Borntraeger
  2020-02-12 11:52           ` Christian Borntraeger
                             ` (2 more replies)
  0 siblings, 3 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-11  9:23 UTC (permalink / raw)
  To: david
  Cc: Ulrich.Weigand, aarcange, akpm, borntraeger, cohuck, frankja,
	gor, imbrenda, kvm, linux-mm, linux-s390, mimu, thuth

From: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>

The adapter interrupt page containing the indicator bits is currently
pinned. That means that a guest with many devices can pin a lot of
memory pages in the host. This also complicates the reference tracking
which is needed for memory management handling of protected virtual
machines.
We can simply try to get the userspace page, set the bits, and free the
page. By storing the userspace address in the irq routing entry instead
of the guest address we can actually avoid many lookups and list walks
so that this variant is very likely not slower.

Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
[borntraeger@de.ibm.com: patch simplification]
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
quick and dirty sketch of how this could look


 arch/s390/include/asm/kvm_host.h |   3 -
 arch/s390/kvm/interrupt.c        | 146 +++++++++++--------------------
 2 files changed, 49 insertions(+), 100 deletions(-)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 0d398738ded9..88a218872fa0 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -771,9 +771,6 @@ struct s390_io_adapter {
 	bool masked;
 	bool swap;
 	bool suppressible;
-	struct rw_semaphore maps_lock;
-	struct list_head maps;
-	atomic_t nr_maps;
 };
 
 #define MAX_S390_IO_ADAPTERS ((MAX_ISC + 1) * 8)
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index d4d35ec79e12..e6fe8b61ee9b 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2459,9 +2459,6 @@ static int register_io_adapter(struct kvm_device *dev,
 	if (!adapter)
 		return -ENOMEM;
 
-	INIT_LIST_HEAD(&adapter->maps);
-	init_rwsem(&adapter->maps_lock);
-	atomic_set(&adapter->nr_maps, 0);
 	adapter->id = adapter_info.id;
 	adapter->isc = adapter_info.isc;
 	adapter->maskable = adapter_info.maskable;
@@ -2488,83 +2485,26 @@ int kvm_s390_mask_adapter(struct kvm *kvm, unsigned int id, bool masked)
 
 static int kvm_s390_adapter_map(struct kvm *kvm, unsigned int id, __u64 addr)
 {
-	struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
-	struct s390_map_info *map;
-	int ret;
-
-	if (!adapter || !addr)
-		return -EINVAL;
-
-	map = kzalloc(sizeof(*map), GFP_KERNEL);
-	if (!map) {
-		ret = -ENOMEM;
-		goto out;
-	}
-	INIT_LIST_HEAD(&map->list);
-	map->guest_addr = addr;
-	map->addr = gmap_translate(kvm->arch.gmap, addr);
-	if (map->addr == -EFAULT) {
-		ret = -EFAULT;
-		goto out;
-	}
-	ret = get_user_pages_fast(map->addr, 1, FOLL_WRITE, &map->page);
-	if (ret < 0)
-		goto out;
-	BUG_ON(ret != 1);
-	down_write(&adapter->maps_lock);
-	if (atomic_inc_return(&adapter->nr_maps) < MAX_S390_ADAPTER_MAPS) {
-		list_add_tail(&map->list, &adapter->maps);
-		ret = 0;
-	} else {
-		put_page(map->page);
-		ret = -EINVAL;
+	/*
+	 * We resolve the gpa to hva when setting the IRQ routing. If userspace
+	 * decides to mess with the memslots it better also updates the irq
+	 * routing. Otherwise we will write to the wrong userspace address.
+	 */
+	return 0;
 	}
-	up_write(&adapter->maps_lock);
-out:
-	if (ret)
-		kfree(map);
-	return ret;
-}
 
 static int kvm_s390_adapter_unmap(struct kvm *kvm, unsigned int id, __u64 addr)
 {
-	struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
-	struct s390_map_info *map, *tmp;
-	int found = 0;
-
-	if (!adapter || !addr)
-		return -EINVAL;
-
-	down_write(&adapter->maps_lock);
-	list_for_each_entry_safe(map, tmp, &adapter->maps, list) {
-		if (map->guest_addr == addr) {
-			found = 1;
-			atomic_dec(&adapter->nr_maps);
-			list_del(&map->list);
-			put_page(map->page);
-			kfree(map);
-			break;
-		}
-	}
-	up_write(&adapter->maps_lock);
-
-	return found ? 0 : -EINVAL;
+	return 0;
 }
 
 void kvm_s390_destroy_adapters(struct kvm *kvm)
 {
 	int i;
-	struct s390_map_info *map, *tmp;
 
 	for (i = 0; i < MAX_S390_IO_ADAPTERS; i++) {
 		if (!kvm->arch.adapters[i])
 			continue;
-		list_for_each_entry_safe(map, tmp,
-					 &kvm->arch.adapters[i]->maps, list) {
-			list_del(&map->list);
-			put_page(map->page);
-			kfree(map);
-		}
 		kfree(kvm->arch.adapters[i]);
 	}
 }
@@ -2831,19 +2771,25 @@ static unsigned long get_ind_bit(__u64 addr, unsigned long bit_nr, bool swap)
 	return swap ? (bit ^ (BITS_PER_LONG - 1)) : bit;
 }
 
-static struct s390_map_info *get_map_info(struct s390_io_adapter *adapter,
-					  u64 addr)
+static struct page *get_map_page(struct kvm *kvm,
+				 struct s390_io_adapter *adapter,
+				 u64 uaddr)
 {
-	struct s390_map_info *map;
+	struct page *page;
+	int ret;
 
 	if (!adapter)
 		return NULL;
-
-	list_for_each_entry(map, &adapter->maps, list) {
-		if (map->guest_addr == addr)
-			return map;
-	}
-	return NULL;
+	page = NULL;
+	if (!uaddr)
+		return NULL;
+	down_read(&kvm->mm->mmap_sem);
+	ret = get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
+				    &page, NULL, NULL);
+	if (ret < 1)
+		page = NULL;
+	up_read(&kvm->mm->mmap_sem);
+	return page;
 }
 
 static int adapter_indicators_set(struct kvm *kvm,
@@ -2852,30 +2798,35 @@ static int adapter_indicators_set(struct kvm *kvm,
 {
 	unsigned long bit;
 	int summary_set, idx;
-	struct s390_map_info *info;
+	struct page *ind_page, *summary_page;
 	void *map;
 
-	info = get_map_info(adapter, adapter_int->ind_addr);
-	if (!info)
+	ind_page = get_map_page(kvm, adapter, adapter_int->ind_addr);
+	if (!ind_page)
 		return -1;
-	map = page_address(info->page);
-	bit = get_ind_bit(info->addr, adapter_int->ind_offset, adapter->swap);
-	set_bit(bit, map);
-	idx = srcu_read_lock(&kvm->srcu);
-	mark_page_dirty(kvm, info->guest_addr >> PAGE_SHIFT);
-	set_page_dirty_lock(info->page);
-	info = get_map_info(adapter, adapter_int->summary_addr);
-	if (!info) {
-		srcu_read_unlock(&kvm->srcu, idx);
+	summary_page = get_map_page(kvm, adapter, adapter_int->summary_addr);
+	if (!summary_page) {
+		put_page(ind_page);
 		return -1;
 	}
-	map = page_address(info->page);
-	bit = get_ind_bit(info->addr, adapter_int->summary_offset,
-			  adapter->swap);
+
+	idx = srcu_read_lock(&kvm->srcu);
+	map = page_address(ind_page);
+	bit = get_ind_bit(adapter_int->ind_addr,
+			  adapter_int->ind_offset, adapter->swap);
+	set_bit(bit, map);
+	mark_page_dirty(kvm, adapter_int->ind_addr >> PAGE_SHIFT);
+	set_page_dirty_lock(ind_page);
+	map = page_address(summary_page);
+	bit = get_ind_bit(adapter_int->summary_addr,
+			  adapter_int->summary_offset, adapter->swap);
 	summary_set = test_and_set_bit(bit, map);
-	mark_page_dirty(kvm, info->guest_addr >> PAGE_SHIFT);
-	set_page_dirty_lock(info->page);
+	mark_page_dirty(kvm, adapter_int->summary_addr >> PAGE_SHIFT);
+	set_page_dirty_lock(summary_page);
 	srcu_read_unlock(&kvm->srcu, idx);
+
+	put_page(ind_page);
+	put_page(summary_page);
 	return summary_set ? 0 : 1;
 }
 
@@ -2897,9 +2848,7 @@ static int set_adapter_int(struct kvm_kernel_irq_routing_entry *e,
 	adapter = get_io_adapter(kvm, e->adapter.adapter_id);
 	if (!adapter)
 		return -1;
-	down_read(&adapter->maps_lock);
 	ret = adapter_indicators_set(kvm, adapter, &e->adapter);
-	up_read(&adapter->maps_lock);
 	if ((ret > 0) && !adapter->masked) {
 		ret = kvm_s390_inject_airq(kvm, adapter);
 		if (ret == 0)
@@ -2951,12 +2900,15 @@ int kvm_set_routing_entry(struct kvm *kvm,
 			  const struct kvm_irq_routing_entry *ue)
 {
 	int ret;
+	u64 uaddr;
 
 	switch (ue->type) {
 	case KVM_IRQ_ROUTING_S390_ADAPTER:
 		e->set = set_adapter_int;
-		e->adapter.summary_addr = ue->u.adapter.summary_addr;
-		e->adapter.ind_addr = ue->u.adapter.ind_addr;
+		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr);
+		e->adapter.summary_addr = uaddr;
+		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.ind_addr);
+		e->adapter.ind_addr = uaddr;
 		e->adapter.summary_offset = ue->u.adapter.summary_offset;
 		e->adapter.ind_offset = ue->u.adapter.ind_offset;
 		e->adapter.adapter_id = ue->u.adapter.adapter_id;
-- 
2.24.0



^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-10 17:27   ` Christian Borntraeger
@ 2020-02-11 11:26     ` Will Deacon
  2020-02-11 11:43       ` Christian Borntraeger
  2020-02-13 14:48       ` Christian Borntraeger
  2020-02-13 19:56     ` Sean Christopherson
  1 sibling, 2 replies; 47+ messages in thread
From: Will Deacon @ 2020-02-11 11:26 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Janosch Frank, Andrew Morton, Marc Zyngier, Sean Christopherson,
	Tom Lendacky, KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, kvm-ppc, Paolo Bonzini,
	mark.rutland, qperret, palmerdabbelt

On Mon, Feb 10, 2020 at 06:27:04PM +0100, Christian Borntraeger wrote:
> CC Marc Zyngier for KVM on ARM.  Marc, see below. Will there be any
> use for this on KVM/ARM in the future?

I can't speak for Marc, but I can say that we're interested in something
like this for potentially isolating VMs from a KVM host in Android.
However, we've currently been working on the assumption that the memory
removed from the host won't usually be touched by the host (i.e. no
KSM or swapping out), so all we'd probably want at the moment is to be
able to return an error back from arch_make_page_accessible(). Its return
code is ignored in this patch :/
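
In the gup fast path that would mean something like this (sketch only;
the exact cleanup depends on what the surrounding code looks like by
then):

	if (arch_make_page_accessible(page)) {
		put_page(head);	/* drop the reference taken above */
		goto pte_unmap;
	}
	SetPageReferenced(page);
	pages[*nr] = page;
	(*nr)++;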

One thing I don't grok about the ultravisor encryption is how it avoids
replay attacks when paging back in. For example, if the host is compromised
and replaces the page contents with an old encrypted value. Are you storing
per-page metadata somewhere to ensure "freshness" of the encrypted data?

Will


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-11 11:26     ` Will Deacon
@ 2020-02-11 11:43       ` Christian Borntraeger
  2020-02-13 14:48       ` Christian Borntraeger
  1 sibling, 0 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-11 11:43 UTC (permalink / raw)
  To: Will Deacon
  Cc: Janosch Frank, Andrew Morton, Marc Zyngier, Sean Christopherson,
	Tom Lendacky, KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, kvm-ppc, Paolo Bonzini,
	mark.rutland, qperret, palmerdabbelt



On 11.02.20 12:26, Will Deacon wrote:
> On Mon, Feb 10, 2020 at 06:27:04PM +0100, Christian Borntraeger wrote:
>> CC Marc Zyngier for KVM on ARM.  Marc, see below. Will there be any
>> use for this on KVM/ARM in the future?
> 
> I can't speak for Marc, but I can say that we're interested in something
> like this for potentially isolating VMs from a KVM host in Android.
> However, we've currently been working on the assumption that the memory
> removed from the host won't usually be touched by the host (i.e. no
> KSM or swapping out), so all we'd probably want at the moment is to be
> able to return an error back from arch_make_page_accessible(). Its return
> code is ignored in this patch :/
> 
> One thing I don't grok about the ultravisor encryption is how it avoids
> replay attacks when paging back in. For example, if the host is compromised
> and replaces the page contents with an old encrypted value. Are you storing
> per-page metadata somewhere to ensure "freshness" of the encrypted data?

Can't talk about the others, but on s390 the ultravisor stores counter,
tweak, address and hashing information. No replay or page exchange within
the guest is possible. (We can move the guest content to a different host
page though, by using the export/import, as this will revalidate the
correctness from the guest point of view.)



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 21/35] KVM: s390/mm: handle guest unpin events
  2020-02-10 14:58   ` Thomas Huth
@ 2020-02-11 13:21     ` Cornelia Huck
  0 siblings, 0 replies; 47+ messages in thread
From: Cornelia Huck @ 2020-02-11 13:21 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Christian Borntraeger, Janosch Frank, KVM, David Hildenbrand,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton

On Mon, 10 Feb 2020 15:58:11 +0100
Thomas Huth <thuth@redhat.com> wrote:

> On 07/02/2020 12.39, Christian Borntraeger wrote:
> > From: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > 
> > The current code tries to first pin shared pages; if that fails (e.g.
> > because the page is not shared) it will export them. For shared pages
> > this means that we get a new intercept telling us that the guest is
> > unsharing that page. We will make the page secure at that point in time
> > and revoke the host access. This is synchronized with other host events,
> > e.g. the code will wait until host I/O has finished.
> > 
> > Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > [borntraeger@de.ibm.com: patch merging, splitting, fixing]
> > Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> > ---
> >  arch/s390/kvm/intercept.c | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> > 
> > diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
> > index 2a966dc52611..e155389a4a66 100644
> > --- a/arch/s390/kvm/intercept.c
> > +++ b/arch/s390/kvm/intercept.c
> > @@ -16,6 +16,7 @@
> >  #include <asm/asm-offsets.h>
> >  #include <asm/irq.h>
> >  #include <asm/sysinfo.h>
> > +#include <asm/uv.h>
> >  
> >  #include "kvm-s390.h"
> >  #include "gaccess.h"
> > @@ -484,12 +485,35 @@ static int handle_pv_sclp(struct kvm_vcpu *vcpu)
> >  	return 0;
> >  }
> >  
> > +static int handle_pv_uvc(struct kvm_vcpu *vcpu)
> > +{
> > +	struct uv_cb_share *guest_uvcb = (void *)vcpu->arch.sie_block->sidad;
> > +	struct uv_cb_cts uvcb = {
> > +		.header.cmd	= UVC_CMD_UNPIN_PAGE_SHARED,
> > +		.header.len	= sizeof(uvcb),
> > +		.guest_handle	= kvm_s390_pv_handle(vcpu->kvm),
> > +		.gaddr		= guest_uvcb->paddr,
> > +	};
> > +	int rc;
> > +
> > +	if (guest_uvcb->header.cmd != UVC_CMD_REMOVE_SHARED_ACCESS) {
> > +		WARN_ONCE(1, "Unexpected UVC 0x%x!\n", guest_uvcb->header.cmd);  
> 
> Is there a way to signal the failed command to the guest, too?

I'm wondering at which layer the actual problem occurs here. Is it
because a (new) command was not interpreted or rejected by the
ultravisor so that it ended up being handled by the hypervisor? If so,
what should the guest know?

> 
>  Thomas
> 
> 
> > +		return 0;
> > +	}
> > +	rc = uv_make_secure(vcpu->arch.gmap, uvcb.gaddr, &uvcb);
> > +	if (rc == -EINVAL && uvcb.header.rc == 0x104)

This wants a comment.
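
Maybe something along these lines, if my reading is right (what rc 0x104
actually means is a guess here and needs checking against the UV
documentation):

	/*
	 * rc 0x104 presumably reports that the page is already in the
	 * state the unpin would produce, so there is nothing left to do.
	 */
	if (rc == -EINVAL && uvcb.header.rc == 0x104)
		return 0;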

> > +		return 0;
> > +	return rc;
> > +}
> > +
> >  static int handle_pv_notification(struct kvm_vcpu *vcpu)
> >  {
> >  	if (vcpu->arch.sie_block->ipa == 0xb210)
> >  		return handle_pv_spx(vcpu);
> >  	if (vcpu->arch.sie_block->ipa == 0xb220)
> >  		return handle_pv_sclp(vcpu);
> > +	if (vcpu->arch.sie_block->ipa == 0xb9a4)
> > +		return handle_pv_uvc(vcpu);

Is it defined by the architecture what the possible commands are
for which the hypervisor may get control? If we get something
unexpected, is returning 0 the right strategy?

> >  
> >  	return handle_instruction(vcpu);
> >  }
> >   
> 



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v2 RFC] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-11  9:23         ` [PATCH v2 RFC] " Christian Borntraeger
@ 2020-02-12 11:52           ` Christian Borntraeger
  2020-02-12 12:16           ` David Hildenbrand
  2020-02-12 12:39           ` Cornelia Huck
  2 siblings, 0 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-12 11:52 UTC (permalink / raw)
  To: david
  Cc: Ulrich.Weigand, aarcange, akpm, cohuck, frankja, gor, imbrenda,
	kvm, linux-mm, linux-s390, mimu, thuth

I pushed that variant to my next branch. This should trigger several regression
runs with regard to function and performance for normal KVM guests.
Let's see if this has any impact at all. If not, this could be the simplest
solution, one that also simplifies a lot of code.

On 11.02.20 10:23, Christian Borntraeger wrote:
> From: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> 
> The adapter interrupt page containing the indicator bits is currently
> pinned. That means that a guest with many devices can pin a lot of
> memory pages in the host. This also complicates the reference tracking
> which is needed for memory management handling of protected virtual
> machines.
> We can simply try to get the userspace page, set the bits, and free the
> page. By storing the userspace address in the irq routing entry instead
> of the guest address we can actually avoid many lookups and list walks
> so that this variant is very likely not slower.
> 
> Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> [borntraeger@de.ibm.com: patch simplification]
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
> quick and dirty sketch of how this could look
> 
> 
>  arch/s390/include/asm/kvm_host.h |   3 -
>  arch/s390/kvm/interrupt.c        | 146 +++++++++++--------------------
>  2 files changed, 49 insertions(+), 100 deletions(-)
> 
> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> index 0d398738ded9..88a218872fa0 100644
> --- a/arch/s390/include/asm/kvm_host.h
> +++ b/arch/s390/include/asm/kvm_host.h
> @@ -771,9 +771,6 @@ struct s390_io_adapter {
>  	bool masked;
>  	bool swap;
>  	bool suppressible;
> -	struct rw_semaphore maps_lock;
> -	struct list_head maps;
> -	atomic_t nr_maps;
>  };
>  
>  #define MAX_S390_IO_ADAPTERS ((MAX_ISC + 1) * 8)
> diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
> index d4d35ec79e12..e6fe8b61ee9b 100644
> --- a/arch/s390/kvm/interrupt.c
> +++ b/arch/s390/kvm/interrupt.c
> @@ -2459,9 +2459,6 @@ static int register_io_adapter(struct kvm_device *dev,
>  	if (!adapter)
>  		return -ENOMEM;
>  
> -	INIT_LIST_HEAD(&adapter->maps);
> -	init_rwsem(&adapter->maps_lock);
> -	atomic_set(&adapter->nr_maps, 0);
>  	adapter->id = adapter_info.id;
>  	adapter->isc = adapter_info.isc;
>  	adapter->maskable = adapter_info.maskable;
> @@ -2488,83 +2485,26 @@ int kvm_s390_mask_adapter(struct kvm *kvm, unsigned int id, bool masked)
>  
>  static int kvm_s390_adapter_map(struct kvm *kvm, unsigned int id, __u64 addr)
>  {
> -	struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
> -	struct s390_map_info *map;
> -	int ret;
> -
> -	if (!adapter || !addr)
> -		return -EINVAL;
> -
> -	map = kzalloc(sizeof(*map), GFP_KERNEL);
> -	if (!map) {
> -		ret = -ENOMEM;
> -		goto out;
> -	}
> -	INIT_LIST_HEAD(&map->list);
> -	map->guest_addr = addr;
> -	map->addr = gmap_translate(kvm->arch.gmap, addr);
> -	if (map->addr == -EFAULT) {
> -		ret = -EFAULT;
> -		goto out;
> -	}
> -	ret = get_user_pages_fast(map->addr, 1, FOLL_WRITE, &map->page);
> -	if (ret < 0)
> -		goto out;
> -	BUG_ON(ret != 1);
> -	down_write(&adapter->maps_lock);
> -	if (atomic_inc_return(&adapter->nr_maps) < MAX_S390_ADAPTER_MAPS) {
> -		list_add_tail(&map->list, &adapter->maps);
> -		ret = 0;
> -	} else {
> -		put_page(map->page);
> -		ret = -EINVAL;
> +	/*
> +	 * We resolve the gpa to hva when setting the IRQ routing. If userspace
> +	 * decides to mess with the memslots it better also updates the irq
> +	 * routing. Otherwise we will write to the wrong userspace address.
> +	 */
> +	return 0;
>  	}
> -	up_write(&adapter->maps_lock);
> -out:
> -	if (ret)
> -		kfree(map);
> -	return ret;
> -}
>  
>  static int kvm_s390_adapter_unmap(struct kvm *kvm, unsigned int id, __u64 addr)
>  {
> -	struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
> -	struct s390_map_info *map, *tmp;
> -	int found = 0;
> -
> -	if (!adapter || !addr)
> -		return -EINVAL;
> -
> -	down_write(&adapter->maps_lock);
> -	list_for_each_entry_safe(map, tmp, &adapter->maps, list) {
> -		if (map->guest_addr == addr) {
> -			found = 1;
> -			atomic_dec(&adapter->nr_maps);
> -			list_del(&map->list);
> -			put_page(map->page);
> -			kfree(map);
> -			break;
> -		}
> -	}
> -	up_write(&adapter->maps_lock);
> -
> -	return found ? 0 : -EINVAL;
> +	return 0;
>  }
>  
>  void kvm_s390_destroy_adapters(struct kvm *kvm)
>  {
>  	int i;
> -	struct s390_map_info *map, *tmp;
>  
>  	for (i = 0; i < MAX_S390_IO_ADAPTERS; i++) {
>  		if (!kvm->arch.adapters[i])
>  			continue;
> -		list_for_each_entry_safe(map, tmp,
> -					 &kvm->arch.adapters[i]->maps, list) {
> -			list_del(&map->list);
> -			put_page(map->page);
> -			kfree(map);
> -		}
>  		kfree(kvm->arch.adapters[i]);
>  	}
>  }
> @@ -2831,19 +2771,25 @@ static unsigned long get_ind_bit(__u64 addr, unsigned long bit_nr, bool swap)
>  	return swap ? (bit ^ (BITS_PER_LONG - 1)) : bit;
>  }
>  
> -static struct s390_map_info *get_map_info(struct s390_io_adapter *adapter,
> -					  u64 addr)
> +static struct page *get_map_page(struct kvm *kvm,
> +				 struct s390_io_adapter *adapter,
> +				 u64 uaddr)
>  {
> -	struct s390_map_info *map;
> +	struct page *page;
> +	int ret;
>  
>  	if (!adapter)
>  		return NULL;
> -
> -	list_for_each_entry(map, &adapter->maps, list) {
> -		if (map->guest_addr == addr)
> -			return map;
> -	}
> -	return NULL;
> +	page = NULL;
> +	if (!uaddr)
> +		return NULL;
> +	down_read(&kvm->mm->mmap_sem);
> +	ret = get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
> +				    &page, NULL, NULL);
> +	if (ret < 1)
> +		page = NULL;
> +	up_read(&kvm->mm->mmap_sem);
> +	return page;
>  }
>  
>  static int adapter_indicators_set(struct kvm *kvm,
> @@ -2852,30 +2798,35 @@ static int adapter_indicators_set(struct kvm *kvm,
>  {
>  	unsigned long bit;
>  	int summary_set, idx;
> -	struct s390_map_info *info;
> +	struct page *ind_page, *summary_page;
>  	void *map;
>  
> -	info = get_map_info(adapter, adapter_int->ind_addr);
> -	if (!info)
> +	ind_page = get_map_page(kvm, adapter, adapter_int->ind_addr);
> +	if (!ind_page)
>  		return -1;
> -	map = page_address(info->page);
> -	bit = get_ind_bit(info->addr, adapter_int->ind_offset, adapter->swap);
> -	set_bit(bit, map);
> -	idx = srcu_read_lock(&kvm->srcu);
> -	mark_page_dirty(kvm, info->guest_addr >> PAGE_SHIFT);
> -	set_page_dirty_lock(info->page);
> -	info = get_map_info(adapter, adapter_int->summary_addr);
> -	if (!info) {
> -		srcu_read_unlock(&kvm->srcu, idx);
> +	summary_page = get_map_page(kvm, adapter, adapter_int->summary_addr);
> +	if (!summary_page) {
> +		put_page(ind_page);
>  		return -1;
>  	}
> -	map = page_address(info->page);
> -	bit = get_ind_bit(info->addr, adapter_int->summary_offset,
> -			  adapter->swap);
> +
> +	idx = srcu_read_lock(&kvm->srcu);
> +	map = page_address(ind_page);
> +	bit = get_ind_bit(adapter_int->ind_addr,
> +			  adapter_int->ind_offset, adapter->swap);
> +	set_bit(bit, map);
> +	mark_page_dirty(kvm, adapter_int->ind_addr >> PAGE_SHIFT);
> +	set_page_dirty_lock(ind_page);
> +	map = page_address(summary_page);
> +	bit = get_ind_bit(adapter_int->summary_addr,
> +			  adapter_int->summary_offset, adapter->swap);
>  	summary_set = test_and_set_bit(bit, map);
> -	mark_page_dirty(kvm, info->guest_addr >> PAGE_SHIFT);
> -	set_page_dirty_lock(info->page);
> +	mark_page_dirty(kvm, adapter_int->summary_addr >> PAGE_SHIFT);
> +	set_page_dirty_lock(summary_page);
>  	srcu_read_unlock(&kvm->srcu, idx);
> +
> +	put_page(ind_page);
> +	put_page(summary_page);
>  	return summary_set ? 0 : 1;
>  }
>  
> @@ -2897,9 +2848,7 @@ static int set_adapter_int(struct kvm_kernel_irq_routing_entry *e,
>  	adapter = get_io_adapter(kvm, e->adapter.adapter_id);
>  	if (!adapter)
>  		return -1;
> -	down_read(&adapter->maps_lock);
>  	ret = adapter_indicators_set(kvm, adapter, &e->adapter);
> -	up_read(&adapter->maps_lock);
>  	if ((ret > 0) && !adapter->masked) {
>  		ret = kvm_s390_inject_airq(kvm, adapter);
>  		if (ret == 0)
> @@ -2951,12 +2900,15 @@ int kvm_set_routing_entry(struct kvm *kvm,
>  			  const struct kvm_irq_routing_entry *ue)
>  {
>  	int ret;
> +	u64 uaddr;
>  
>  	switch (ue->type) {
>  	case KVM_IRQ_ROUTING_S390_ADAPTER:
>  		e->set = set_adapter_int;
> -		e->adapter.summary_addr = ue->u.adapter.summary_addr;
> -		e->adapter.ind_addr = ue->u.adapter.ind_addr;
> +		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr);
> +		e->adapter.summary_addr = uaddr;
> +		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.ind_addr);
> +		e->adapter.ind_addr = uaddr;
>  		e->adapter.summary_offset = ue->u.adapter.summary_offset;
>  		e->adapter.ind_offset = ue->u.adapter.ind_offset;
>  		e->adapter.adapter_id = ue->u.adapter.adapter_id;
> 




* Re: [PATCH v2 RFC] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-11  9:23         ` [PATCH v2 RFC] " Christian Borntraeger
  2020-02-12 11:52           ` Christian Borntraeger
@ 2020-02-12 12:16           ` David Hildenbrand
  2020-02-12 12:22             ` Christian Borntraeger
  2020-02-12 12:39           ` Cornelia Huck
  2 siblings, 1 reply; 47+ messages in thread
From: David Hildenbrand @ 2020-02-12 12:16 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Ulrich.Weigand, aarcange, akpm, cohuck, frankja, gor, imbrenda,
	kvm, linux-mm, linux-s390, mimu, thuth, dgilbert


> +	/*
> +	 * We resolve the gpa to hva when setting the IRQ routing. If userspace
> +	 * decides to mess with the memslots it better also updates the irq
> +	 * routing. Otherwise we will write to the wrong userspace address.
> +	 */

I guess this is just like the old handling, where a page was pinned. But
slightly better :) So the pages are definitely part of guest memory.

Fun stuff: If (a nasty) guest (in current code) zaps this page using
balloon inflation and the page is re-accessed (e.g., by the guest or by
the host), a new page will be faulted in, and there will be an
inconsistency between what the guest/user space sees and what this code
sees. Going via the user space address looks cleaner.

Now, with postcopy live migration, we will also zap all guest memory
before starting the guest, I do wonder if that produces a similar
inconsistency ... usually, when pages are pinned in the kernel, we
inhibit the balloon and implicitly also postcopy.

If so, this actually fixes an issue. But might depend on the order
things are initialized in user space. Or I am messing up things :)

[...]

>  static int kvm_s390_adapter_unmap(struct kvm *kvm, unsigned int id, __u64 addr)
>  {
> -	struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
> -	struct s390_map_info *map, *tmp;
> -	int found = 0;
> -
> -	if (!adapter || !addr)
> -		return -EINVAL;
> -
> -	down_write(&adapter->maps_lock);
> -	list_for_each_entry_safe(map, tmp, &adapter->maps, list) {
> -		if (map->guest_addr == addr) {
> -			found = 1;
> -			atomic_dec(&adapter->nr_maps);
> -			list_del(&map->list);
> -			put_page(map->page);
> -			kfree(map);
> -			break;
> -		}
> -	}
> -	up_write(&adapter->maps_lock);
> -
> -	return found ? 0 : -EINVAL;
> +	return 0;

Can we get rid of this function?

>  }

> +static struct page *get_map_page(struct kvm *kvm,
> +				 struct s390_io_adapter *adapter,
> +				 u64 uaddr)
>  {
> -	struct s390_map_info *map;
> +	struct page *page;
> +	int ret;
>  
>  	if (!adapter)
>  		return NULL;
> -
> -	list_for_each_entry(map, &adapter->maps, list) {
> -		if (map->guest_addr == addr)
> -			return map;
> -	}
> -	return NULL;
> +	page = NULL;

struct page *page = NULL;

> +	if (!uaddr)
> +		return NULL;
> +	down_read(&kvm->mm->mmap_sem);
> +	ret = get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
> +				    &page, NULL, NULL);
> +	if (ret < 1)
> +		page = NULL;

Is that really necessary? According to the doc, pinned pages are stored
to the array.  ret < 1 means "no pages" were pinned, so nothing should
be stored.
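
I.e., the tail of the function could presumably be as simple as this
(completely untested sketch):

	down_read(&kvm->mm->mmap_sem);
	ret = get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
				    &page, NULL, NULL);
	up_read(&kvm->mm->mmap_sem);
	/* page is only looked at when exactly one page was pinned */
	return ret == 1 ? page : NULL;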

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 RFC] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-12 12:16           ` David Hildenbrand
@ 2020-02-12 12:22             ` Christian Borntraeger
  2020-02-12 12:47               ` David Hildenbrand
  0 siblings, 1 reply; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-12 12:22 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Ulrich.Weigand, aarcange, akpm, cohuck, frankja, gor, imbrenda,
	kvm, linux-mm, linux-s390, mimu, thuth, dgilbert



On 12.02.20 13:16, David Hildenbrand wrote:
> 
>> +	/*
>> +	 * We resolve the gpa to hva when setting the IRQ routing. If userspace
>> +	 * decides to mess with the memslots it better also updates the irq
>> +	 * routing. Otherwise we will write to the wrong userspace address.
>> +	 */
> 
> I guess this is just like the old handling, where a page was pinned. But
> slightly better :) So the pages are definitely part of guest memory.
> 
> Fun stuff: If (a nasty) guest (in current code) zaps this page using
> balloon inflation and the page is re-accessed (e.g., by the guest or by
> the host), a new page will be faulted in, and there will be an
> inconsistency between what the guest/user space sees and what this code
> sees. Going via the user space address looks cleaner.
> 
> Now, with postcopy live migration, we will also zap all guest memory
> before starting the guest, I do wonder if that produces a similar
> inconsistency ... usually, when pages are pinned in the kernel, we
> inhibit the balloon and implicitly also postcopy.
> 
> If so, this actually fixes an issue. But might depend on the order
> things are initialized in user space. Or I am messing up things :)

Yes, the current code has some corner cases where a guest can shoot itself
in the foot. This variant could actually be safer.
> 
> [...]
> 
>>  static int kvm_s390_adapter_unmap(struct kvm *kvm, unsigned int id, __u64 addr)
>>  {
>> -	struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
>> -	struct s390_map_info *map, *tmp;
>> -	int found = 0;
>> -
>> -	if (!adapter || !addr)
>> -		return -EINVAL;
>> -
>> -	down_write(&adapter->maps_lock);
>> -	list_for_each_entry_safe(map, tmp, &adapter->maps, list) {
>> -		if (map->guest_addr == addr) {
>> -			found = 1;
>> -			atomic_dec(&adapter->nr_maps);
>> -			list_del(&map->list);
>> -			put_page(map->page);
>> -			kfree(map);
>> -			break;
>> -		}
>> -	}
>> -	up_write(&adapter->maps_lock);
>> -
>> -	return found ? 0 : -EINVAL;
>> +	return 0;
> 
> Can we get rid of this function?

And do a return in the handler? Maybe, yes. Will have a look.
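
Something like this in the attribute handler, I guess (untested sketch,
assuming the switch in modify_io_adapter()):

	case KVM_S390_IO_ADAPTER_MAP:
	case KVM_S390_IO_ADAPTER_UNMAP:
		/* The addresses are resolved at routing setup time instead */
		ret = 0;
		break;
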
> 
>>  }
> 
>> +static struct page *get_map_page(struct kvm *kvm,
>> +				 struct s390_io_adapter *adapter,
>> +				 u64 uaddr)
>>  {
>> -	struct s390_map_info *map;
>> +	struct page *page;
>> +	int ret;
>>  
>>  	if (!adapter)
>>  		return NULL;
>> -
>> -	list_for_each_entry(map, &adapter->maps, list) {
>> -		if (map->guest_addr == addr)
>> -			return map;
>> -	}
>> -	return NULL;
>> +	page = NULL;
> 
> struct page *page = NULL;
> 
>> +	if (!uaddr)
>> +		return NULL;
>> +	down_read(&kvm->mm->mmap_sem);
>> +	ret = get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
>> +				    &page, NULL, NULL);
>> +	if (ret < 1)
>> +		page = NULL;
> 
> Is that really necessary? According to the doc, pinned pages are stored
> to the array.  ret < 1 means "no pages" were pinned, so nothing should
> be stored.

Probably. Will have a look.




* Re: [PATCH v2 RFC] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-11  9:23         ` [PATCH v2 RFC] " Christian Borntraeger
  2020-02-12 11:52           ` Christian Borntraeger
  2020-02-12 12:16           ` David Hildenbrand
@ 2020-02-12 12:39           ` Cornelia Huck
  2020-02-12 12:44             ` Christian Borntraeger
  2 siblings, 1 reply; 47+ messages in thread
From: Cornelia Huck @ 2020-02-12 12:39 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: david, Ulrich.Weigand, aarcange, akpm, frankja, gor, imbrenda,
	kvm, linux-mm, linux-s390, mimu, thuth

On Tue, 11 Feb 2020 04:23:41 -0500
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> From: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> 
> The adapter interrupt page containing the indicator bits is currently
> pinned. That means that a guest with many devices can pin a lot of
> memory pages in the host. This also complicates the reference tracking
> which is needed for memory management handling of protected virtual
> machines.
> We can simply try to get the userspace page, set the bits and free the
> page. By storing the userspace address in the irq routing entry instead
> of the guest address we can actually avoid many lookups and list walks
> so that this variant is very likely not slower.
> 
> Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> [borntraeger@de.ibm.com: patch simplification]
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
> quick and dirty, to show what this could look like
> 
> 
>  arch/s390/include/asm/kvm_host.h |   3 -
>  arch/s390/kvm/interrupt.c        | 146 +++++++++++--------------------
>  2 files changed, 49 insertions(+), 100 deletions(-)
> 

(...)

> @@ -2488,83 +2485,26 @@ int kvm_s390_mask_adapter(struct kvm *kvm, unsigned int id, bool masked)
>  
>  static int kvm_s390_adapter_map(struct kvm *kvm, unsigned int id, __u64 addr)
>  {
> -	struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
> -	struct s390_map_info *map;
> -	int ret;
> -
> -	if (!adapter || !addr)
> -		return -EINVAL;
> -
> -	map = kzalloc(sizeof(*map), GFP_KERNEL);
> -	if (!map) {
> -		ret = -ENOMEM;
> -		goto out;
> -	}
> -	INIT_LIST_HEAD(&map->list);
> -	map->guest_addr = addr;
> -	map->addr = gmap_translate(kvm->arch.gmap, addr);
> -	if (map->addr == -EFAULT) {
> -		ret = -EFAULT;
> -		goto out;
> -	}
> -	ret = get_user_pages_fast(map->addr, 1, FOLL_WRITE, &map->page);
> -	if (ret < 0)
> -		goto out;
> -	BUG_ON(ret != 1);
> -	down_write(&adapter->maps_lock);
> -	if (atomic_inc_return(&adapter->nr_maps) < MAX_S390_ADAPTER_MAPS) {
> -		list_add_tail(&map->list, &adapter->maps);
> -		ret = 0;
> -	} else {
> -		put_page(map->page);
> -		ret = -EINVAL;
> +	/*
> +	 * We resolve the gpa to hva when setting the IRQ routing. If userspace
> +	 * decides to mess with the memslots it better also updates the irq
> +	 * routing. Otherwise we will write to the wrong userspace address.
> +	 */
> +	return 0;

Given that this function now always returns 0, we basically get a
completely useless roundtrip into the kernel when userspace is trying
to set up the mappings.

Can we define a new IO_ADAPTER_MAPPING_NOT_NEEDED or so capability that
userspace can check?

This change in behaviour probably wants a change in the documentation
as well.

>  	}
> -	up_write(&adapter->maps_lock);
> -out:
> -	if (ret)
> -		kfree(map);
> -	return ret;
> -}
>  
>  static int kvm_s390_adapter_unmap(struct kvm *kvm, unsigned int id, __u64 addr)
>  {
> -	struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
> -	struct s390_map_info *map, *tmp;
> -	int found = 0;
> -
> -	if (!adapter || !addr)
> -		return -EINVAL;
> -
> -	down_write(&adapter->maps_lock);
> -	list_for_each_entry_safe(map, tmp, &adapter->maps, list) {
> -		if (map->guest_addr == addr) {
> -			found = 1;
> -			atomic_dec(&adapter->nr_maps);
> -			list_del(&map->list);
> -			put_page(map->page);
> -			kfree(map);
> -			break;
> -		}
> -	}
> -	up_write(&adapter->maps_lock);
> -
> -	return found ? 0 : -EINVAL;
> +	return 0;

Same here.

>  }
>  
>  void kvm_s390_destroy_adapters(struct kvm *kvm)
>  {
>  	int i;
> -	struct s390_map_info *map, *tmp;
>  
>  	for (i = 0; i < MAX_S390_IO_ADAPTERS; i++) {
>  		if (!kvm->arch.adapters[i])
>  			continue;
> -		list_for_each_entry_safe(map, tmp,
> -					 &kvm->arch.adapters[i]->maps, list) {
> -			list_del(&map->list);
> -			put_page(map->page);
> -			kfree(map);
> -		}
>  		kfree(kvm->arch.adapters[i]);

Call kfree() unconditionally?
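
I.e. (kfree(NULL) is a no-op):

	for (i = 0; i < MAX_S390_IO_ADAPTERS; i++)
		kfree(kvm->arch.adapters[i]);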

>  	}
>  }
> @@ -2831,19 +2771,25 @@ static unsigned long get_ind_bit(__u64 addr, unsigned long bit_nr, bool swap)
>  	return swap ? (bit ^ (BITS_PER_LONG - 1)) : bit;
>  }
>  
> -static struct s390_map_info *get_map_info(struct s390_io_adapter *adapter,
> -					  u64 addr)
> +static struct page *get_map_page(struct kvm *kvm,
> +				 struct s390_io_adapter *adapter,
> +				 u64 uaddr)
>  {
> -	struct s390_map_info *map;
> +	struct page *page;
> +	int ret;
>  
>  	if (!adapter)
>  		return NULL;
> -
> -	list_for_each_entry(map, &adapter->maps, list) {
> -		if (map->guest_addr == addr)
> -			return map;
> -	}
> -	return NULL;
> +	page = NULL;
> +	if (!uaddr)
> +		return NULL;
> +	down_read(&kvm->mm->mmap_sem);
> +	ret = get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
> +				    &page, NULL, NULL);
> +	if (ret < 1)
> +		page = NULL;
> +	up_read(&kvm->mm->mmap_sem);
> +	return page;
>  }
>  
>  static int adapter_indicators_set(struct kvm *kvm,

(...)

> @@ -2951,12 +2900,15 @@ int kvm_set_routing_entry(struct kvm *kvm,
>  			  const struct kvm_irq_routing_entry *ue)
>  {
>  	int ret;
> +	u64 uaddr;
>  
>  	switch (ue->type) {
>  	case KVM_IRQ_ROUTING_S390_ADAPTER:
>  		e->set = set_adapter_int;
> -		e->adapter.summary_addr = ue->u.adapter.summary_addr;
> -		e->adapter.ind_addr = ue->u.adapter.ind_addr;
> +		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr);

Can gmap_translate() return -EFAULT here? The code above only seems to
check for 0... do we want to return an error here?
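
I.e., something like this sketch:

	uaddr = gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr);
	if (uaddr == -EFAULT)
		return -EFAULT;
	e->adapter.summary_addr = uaddr;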

> +		e->adapter.summary_addr = uaddr;
> +		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.ind_addr);
> +		e->adapter.ind_addr = uaddr;
>  		e->adapter.summary_offset = ue->u.adapter.summary_offset;
>  		e->adapter.ind_offset = ue->u.adapter.ind_offset;
>  		e->adapter.adapter_id = ue->u.adapter.adapter_id;




* Re: [PATCH v2 RFC] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-12 12:39           ` Cornelia Huck
@ 2020-02-12 12:44             ` Christian Borntraeger
  2020-02-12 13:07               ` Cornelia Huck
  0 siblings, 1 reply; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-12 12:44 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: david, Ulrich.Weigand, aarcange, akpm, frankja, gor, imbrenda,
	kvm, linux-mm, linux-s390, mimu, thuth



On 12.02.20 13:39, Cornelia Huck wrote:
[...]

>> +	 */
>> +	return 0;
> 
> Given that this function now always returns 0, we basically get a
> completely useless roundtrip into the kernel when userspace is trying
> to setup the mappings.
> 
> Can we define a new IO_ADAPTER_MAPPING_NOT_NEEDED or so capability that
> userspace can check?

Nack. This is one system call per initial indicator ccw. This happens so
seldom and is so cheap that I do not see a point in optimizing it.


> This change in behaviour probably wants a change in the documentation
> as well.

Yep. 
[...]

>> @@ -2951,12 +2900,15 @@ int kvm_set_routing_entry(struct kvm *kvm,
>>  			  const struct kvm_irq_routing_entry *ue)
>>  {
>>  	int ret;
>> +	u64 uaddr;
>>  
>>  	switch (ue->type) {
>>  	case KVM_IRQ_ROUTING_S390_ADAPTER:
>>  		e->set = set_adapter_int;
>> -		e->adapter.summary_addr = ue->u.adapter.summary_addr;
>> -		e->adapter.ind_addr = ue->u.adapter.ind_addr;
>> +		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr);
> 
> Can gmap_translate() return -EFAULT here? The code above only seems to
> check for 0... do we want to return an error here?

Yes.

> 
>> +		e->adapter.summary_addr = uaddr;
>> +		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.ind_addr);
>> +		e->adapter.ind_addr = uaddr;
>>  		e->adapter.summary_offset = ue->u.adapter.summary_offset;
>>  		e->adapter.ind_offset = ue->u.adapter.ind_offset;
>>  		e->adapter.adapter_id = ue->u.adapter.adapter_id;
> 




* Re: [PATCH v2 RFC] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-12 12:22             ` Christian Borntraeger
@ 2020-02-12 12:47               ` David Hildenbrand
  0 siblings, 0 replies; 47+ messages in thread
From: David Hildenbrand @ 2020-02-12 12:47 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Ulrich.Weigand, aarcange, akpm, cohuck, frankja, gor, imbrenda,
	kvm, linux-mm, linux-s390, mimu, thuth, dgilbert

On 12.02.20 13:22, Christian Borntraeger wrote:
> 
> 
> On 12.02.20 13:16, David Hildenbrand wrote:
>>
>>> +	/*
>>> +	 * We resolve the gpa to hva when setting the IRQ routing. If userspace
>>> +	 * decides to mess with the memslots it better also updates the irq
>>> +	 * routing. Otherwise we will write to the wrong userspace address.
>>> +	 */
>>
>> I guess this is just like the old handling, where a page was pinned. But
>> slightly better :) So the pages are definitely part of guest memory.
>>
>> Fun stuff: If (a nasty) guest (in current code) zaps this page using
>> balloon inflation and the page is re-accessed (e.g., by the guest or by
>> the host), a new page will be faulted in, and there will be an
>> inconsistency between what the guest/user space sees and what this code
>> sees. Going via the user space address looks cleaner.
>>
>> Now, with postcopy live migration, we will also zap all guest memory
>> before starting the guest, I do wonder if that produces a similar
>> inconsistency ... usually, when pages are pinned in the kernel, we
>> inhibit the balloon and implicitly also postcopy.
>>
>> If so, this actually fixes an issue. But might depend on the order
>> things are initialized in user space. Or I am messing up things :)
> 
> Yes, the current code has some corner cases where a guest can shoot itself
> in the foot. This variant could actually be safer.

At least with postcopy it would be a silent migration issue, not
guest-triggered. But I am not sure if it can actually trigger.

Anyhow, this is safer :)

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 RFC] KVM: s390/interrupt: do not pin adapter interrupt pages
  2020-02-12 12:44             ` Christian Borntraeger
@ 2020-02-12 13:07               ` Cornelia Huck
  0 siblings, 0 replies; 47+ messages in thread
From: Cornelia Huck @ 2020-02-12 13:07 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: david, Ulrich.Weigand, aarcange, akpm, frankja, gor, imbrenda,
	kvm, linux-mm, linux-s390, mimu, thuth

On Wed, 12 Feb 2020 13:44:53 +0100
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 12.02.20 13:39, Cornelia Huck wrote:
> [...]
> 
> >> +	 */
> >> +	return 0;  
> > 
> > Given that this function now always returns 0, we basically get a
> > completely useless roundtrip into the kernel when userspace is trying
> > to setup the mappings.
> > 
> > Can we define a new IO_ADAPTER_MAPPING_NOT_NEEDED or so capability that
> > userspace can check?  
> 
> Nack. This is one system call per initial indicator ccw. This happens so
> seldom and is so cheap that I do not see a point in optimizing it.

NB that zpci also calls this. Probably a rare event there as well.

> 
> 
> > This change in behaviour probably wants a change in the documentation
> > as well.  
> 
> Yep. 




* Re: [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests
  2020-02-07 11:39 ` [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests Christian Borntraeger
@ 2020-02-12 13:42   ` Cornelia Huck
  2020-02-13  7:43     ` Christian Borntraeger
  2020-02-14 17:59   ` David Hildenbrand
  1 sibling, 1 reply; 47+ messages in thread
From: Cornelia Huck @ 2020-02-12 13:42 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Janosch Frank, KVM, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton

On Fri,  7 Feb 2020 06:39:28 -0500
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
> 
> This provides the basic ultravisor calls and page table handling to cope
> with secure guests:
> - provide arch_make_page_accessible
> - make pages accessible after unmapping of secure guests
> - provide the ultravisor commands convert to/from secure
> - provide the ultravisor commands pin/unpin shared
> - provide callbacks to make pages secure (inaccessible)
>  - we check for the expected pin count to only make pages secure if the
>    host is not accessing them
>  - we fence hugetlbfs for secure pages
> 
> Co-developed-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> [borntraeger@de.ibm.com: patch merging, splitting, fixing]
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  arch/s390/include/asm/gmap.h        |   2 +
>  arch/s390/include/asm/mmu.h         |   2 +
>  arch/s390/include/asm/mmu_context.h |   1 +
>  arch/s390/include/asm/page.h        |   5 +
>  arch/s390/include/asm/pgtable.h     |  34 +++++-
>  arch/s390/include/asm/uv.h          |  52 +++++++++
>  arch/s390/kernel/uv.c               | 172 ++++++++++++++++++++++++++++
>  7 files changed, 263 insertions(+), 5 deletions(-)

(...)

> +/*
> + * Requests the Ultravisor to encrypt a guest page and make it
> + * accessible to the host for paging (export).
> + *
> + * @paddr: Absolute host address of page to be exported
> + */
> +int uv_convert_from_secure(unsigned long paddr)
> +{
> +	struct uv_cb_cfs uvcb = {
> +		.header.cmd = UVC_CMD_CONV_FROM_SEC_STOR,
> +		.header.len = sizeof(uvcb),
> +		.paddr = paddr
> +	};
> +
> +	uv_call(0, (u64)&uvcb);
> +
> +	if (uvcb.header.rc == 1 || uvcb.header.rc == 0x107)

I think this either wants a comment or some speaking #defines.

> +		return 0;
> +	return -EINVAL;
> +}
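
E.g. (the define names are invented by me, the values are the ones from
the code above):

	#define UVC_RC_EXECUTED		0x0001
	#define UVC_RC_PAGE_EXPORTED	0x0107	/* hypothetical name */

	if (uvcb.header.rc == UVC_RC_EXECUTED ||
	    uvcb.header.rc == UVC_RC_PAGE_EXPORTED)
		return 0;
	return -EINVAL;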

(...)




* Re: [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests
  2020-02-12 13:42   ` Cornelia Huck
@ 2020-02-13  7:43     ` Christian Borntraeger
  2020-02-13  8:44       ` Cornelia Huck
  0 siblings, 1 reply; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-13  7:43 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Janosch Frank, KVM, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton



On 12.02.20 14:42, Cornelia Huck wrote:
> On Fri,  7 Feb 2020 06:39:28 -0500
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
>> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
>>
>> This provides the basic ultravisor calls and page table handling to cope
>> with secure guests:
>> - provide arch_make_page_accessible
>> - make pages accessible after unmapping of secure guests
>> - provide the ultravisor commands convert to/from secure
>> - provide the ultravisor commands pin/unpin shared
>> - provide callbacks to make pages secure (inaccessible)
>>  - we check for the expected pin count to only make pages secure if the
>>    host is not accessing them
>>  - we fence hugetlbfs for secure pages
>>
>> Co-developed-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
>> Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
>> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
>> [borntraeger@de.ibm.com: patch merging, splitting, fixing]
>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>> ---
>>  arch/s390/include/asm/gmap.h        |   2 +
>>  arch/s390/include/asm/mmu.h         |   2 +
>>  arch/s390/include/asm/mmu_context.h |   1 +
>>  arch/s390/include/asm/page.h        |   5 +
>>  arch/s390/include/asm/pgtable.h     |  34 +++++-
>>  arch/s390/include/asm/uv.h          |  52 +++++++++
>>  arch/s390/kernel/uv.c               | 172 ++++++++++++++++++++++++++++
>>  7 files changed, 263 insertions(+), 5 deletions(-)
> 
> (...)
> 
>> +/*
>> + * Requests the Ultravisor to encrypt a guest page and make it
>> + * accessible to the host for paging (export).
>> + *
>> + * @paddr: Absolute host address of page to be exported
>> + */
>> +int uv_convert_from_secure(unsigned long paddr)
>> +{
>> +	struct uv_cb_cfs uvcb = {
>> +		.header.cmd = UVC_CMD_CONV_FROM_SEC_STOR,
>> +		.header.len = sizeof(uvcb),
>> +		.paddr = paddr
>> +	};
>> +
>> +	uv_call(0, (u64)&uvcb);
>> +
>> +	if (uvcb.header.rc == 1 || uvcb.header.rc == 0x107)
> 
> I think this either wants a comment or some speaking #defines.

Yes. We will improve some other aspects of this patch, but I will add

	/* Return on success or if this page was already exported */
> 
>> +		return 0;
>> +	return -EINVAL;
>> +}
> 
> (...)
> 




* Re: [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests
  2020-02-13  7:43     ` Christian Borntraeger
@ 2020-02-13  8:44       ` Cornelia Huck
  0 siblings, 0 replies; 47+ messages in thread
From: Cornelia Huck @ 2020-02-13  8:44 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Janosch Frank, KVM, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton

On Thu, 13 Feb 2020 08:43:33 +0100
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 12.02.20 14:42, Cornelia Huck wrote:
> > On Fri,  7 Feb 2020 06:39:28 -0500
> > Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> >   
> >> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
> >>
> >> This provides the basic ultravisor calls and page table handling to cope
> >> with secure guests:
> >> - provide arch_make_page_accessible
> >> - make pages accessible after unmapping of secure guests
> >> - provide the ultravisor commands convert to/from secure
> >> - provide the ultravisor commands pin/unpin shared
> >> - provide callbacks to make pages secure (inaccessible)
> >>  - we check for the expected pin count to only make pages secure if the
> >>    host is not accessing them
> >>  - we fence hugetlbfs for secure pages
> >>
> >> Co-developed-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> >> Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
> >> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> >> [borntraeger@de.ibm.com: patch merging, splitting, fixing]
> >> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> >> ---
> >>  arch/s390/include/asm/gmap.h        |   2 +
> >>  arch/s390/include/asm/mmu.h         |   2 +
> >>  arch/s390/include/asm/mmu_context.h |   1 +
> >>  arch/s390/include/asm/page.h        |   5 +
> >>  arch/s390/include/asm/pgtable.h     |  34 +++++-
> >>  arch/s390/include/asm/uv.h          |  52 +++++++++
> >>  arch/s390/kernel/uv.c               | 172 ++++++++++++++++++++++++++++
> >>  7 files changed, 263 insertions(+), 5 deletions(-)  
> > 
> > (...)
> >   
> >> +/*
> >> + * Requests the Ultravisor to encrypt a guest page and make it
> >> + * accessible to the host for paging (export).
> >> + *
> >> + * @paddr: Absolute host address of page to be exported
> >> + */
> >> +int uv_convert_from_secure(unsigned long paddr)
> >> +{
> >> +	struct uv_cb_cfs uvcb = {
> >> +		.header.cmd = UVC_CMD_CONV_FROM_SEC_STOR,
> >> +		.header.len = sizeof(uvcb),
> >> +		.paddr = paddr
> >> +	};
> >> +
> >> +	uv_call(0, (u64)&uvcb);
> >> +
> >> +	if (uvcb.header.rc == 1 || uvcb.header.rc == 0x107)  
> > 
> > I think this either wants a comment or some speaking #defines.  
> 
> Yes. We will improve some other aspects of this patch, but I will add
> 
> 	/* Return on success or if this page was already exported */

Sounds good.

> >   
> >> +		return 0;
> >> +	return -EINVAL;
> >> +}  
> > 
> > (...)
> >   
> 




* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-11 11:26     ` Will Deacon
  2020-02-11 11:43       ` Christian Borntraeger
@ 2020-02-13 14:48       ` Christian Borntraeger
  2020-02-18 16:02         ` Will Deacon
  1 sibling, 1 reply; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-13 14:48 UTC (permalink / raw)
  To: Will Deacon
  Cc: Janosch Frank, Andrew Morton, Marc Zyngier, Sean Christopherson,
	Tom Lendacky, KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, kvm-ppc, Paolo Bonzini,
	mark.rutland, qperret, palmerdabbelt



On 11.02.20 12:26, Will Deacon wrote:
> On Mon, Feb 10, 2020 at 06:27:04PM +0100, Christian Borntraeger wrote:
>> CC Marc Zyngier for KVM on ARM.  Marc, see below. Will there be any
>> use for this on KVM/ARM in the future?
> 
> I can't speak for Marc, but I can say that we're interested in something
> like this for potentially isolating VMs from a KVM host in Android.
> However, we've currently been working on the assumption that the memory
> removed from the host won't usually be touched by the host (i.e. no
> KSM or swapping out), so all we'd probably want at the moment is to be
> able to return an error back from arch_make_page_accessible(). Its return
> code is ignored in this patch :/

I think there are two ways at the moment. One is to keep the memory away from
Linux, e.g. by using the memory as device driver memory, similar to kmalloc.
This is kind of what Power does. And I understand you want to follow that
model and do not want to use paging, file backing or so.
Our approach tries to fully integrate into the existing Linux LRU methods.

Back to your approach. What happens when a malicious QEMU starts direct I/O
on such isolated memory? Is that what you meant by adding error checking in
these hooks? For the gup.c code, returning an error seems straightforward.
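
E.g. for follow_page_pte() something like this (sketch only, rc is a new
local and the exact error values are to be decided):

	rc = arch_make_page_accessible(page);
	if (rc) {
		put_page(page);
		page = ERR_PTR(rc);
		goto out;
	}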

I have no idea what to do in writeback. When somebody has managed to trigger
writeback on such a page, it already seems too late.




* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-10 17:27   ` Christian Borntraeger
  2020-02-11 11:26     ` Will Deacon
@ 2020-02-13 19:56     ` Sean Christopherson
  2020-02-13 20:13       ` Christian Borntraeger
  1 sibling, 1 reply; 47+ messages in thread
From: Sean Christopherson @ 2020-02-13 19:56 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Janosch Frank, Andrew Morton, Marc Zyngier, Tom Lendacky, KVM,
	Cornelia Huck, David Hildenbrand, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, kvm-ppc, Paolo Bonzini

On Mon, Feb 10, 2020 at 06:27:04PM +0100, Christian Borntraeger wrote:
> CC Marc Zyngier for KVM on ARM.  Marc, see below. Will there be any
> use for this on KVM/ARM in the future?
> 
> CC Sean Christopherson/Tom Lendacky. Any obvious use case for Intel/AMD
> to have a callback before a page is used for I/O?

Yes?

> Andrew (or other mm people) any chance to get an ACK for this change?
> I could then carry that via s390 or KVM tree. Or if you want to carry
> that yourself I can send an updated version (we need to kind of 
> synchronize that Linus will pull the KVM changes after the mm changes).
> 
> Andrea asked if others would benefit from this, so here are some more
> information about this (and I can also put this into the patch
> description).  So we have talked to the POWER folks. They do not use
> the standard normal memory management, instead they have a hard split
> between secure and normal memory. The secure memory is handled by
> the hypervisor as device memory and the ultravisor and the hypervisor
> move it back and forth when needed.
> 
> On s390 there is no *separate* pool of physical pages that are secure.
> Instead, *any* physical page can be marked as secure or not, by
> setting a bit in a per-page data structure that hardware uses to stop
> unauthorized access.  (That bit is under control of the ultravisor.)
> 
> Note that one side effect of this strategy is that the decision
> *which* secure pages to encrypt and then swap out is actually done by
> the hypervisor, not the ultravisor.  In our case, the hypervisor is
> Linux/KVM, so we're using the regular Linux memory management scheme
> (active/inactive LRU lists etc.) to make this decision.  The advantage
> is that the Ultravisor code does not need to itself implement any
> memory management code, making it a lot simpler.

Disclaimer: I'm not familiar with s390 guest page faults or UV.  I tried
to give myself a crash course, apologies if I'm way out in left field...

AIUI, pages will first be added to a secure guest by converting a normal,
non-secure page to secure and stuffing it into the guest page tables.  To
swap a page from a secure guest, arch_make_page_accessible() will be called
to encrypt the page in place so that it can be accessed by the untrusted
kernel/VMM and written out to disk.  And to fault the page back in, on s390
a secure guest access to a non-secure page will generate a page fault with
a dedicated type.  That fault routes directly to
do_non_secure_storage_access(), which converts the page to secure and thus
makes it re-accessible to the guest.

That all sounds sane and usable for Intel.

My big question is the follow/get flows, more on that below.

> However, in the end this is why we need the hook into Linux memory
> management: once Linux has decided to swap a page out, we need to get
> a chance to tell the Ultravisor to "export" the page (i.e., encrypt
> its contents and mark it no longer secure).
> 
> As outlined below this should be a no-op for anybody not opting in.
> 
> Christian                                   
> 
> On 07.02.20 12:39, Christian Borntraeger wrote:
> > From: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > 
> > With the introduction of protected KVM guests on s390 there is now a
> > concept of inaccessible pages. These pages need to be made accessible
> > before the host can access them.
> > 
> > While cpu accesses will trigger a fault that can be resolved, I/O
> > accesses will just fail.  We need to add a callback into architecture
> > code for places that will do I/O, namely when writeback is started or
> > when a page reference is taken.
> > 
> > Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> > ---
> >  include/linux/gfp.h | 6 ++++++
> >  mm/gup.c            | 2 ++
> >  mm/page-writeback.c | 1 +
> >  3 files changed, 9 insertions(+)
> > 
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index e5b817cb86e7..be2754841369 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -485,6 +485,12 @@ static inline void arch_free_page(struct page *page, int order) { }
> >  #ifndef HAVE_ARCH_ALLOC_PAGE
> >  static inline void arch_alloc_page(struct page *page, int order) { }
> >  #endif
> > +#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
> > +static inline int arch_make_page_accessible(struct page *page)
> > +{
> > +	return 0;
> > +}
> > +#endif
> >  
> >  struct page *
> >  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 7646bf993b25..a01262cd2821 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -257,6 +257,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
> >  			page = ERR_PTR(-ENOMEM);
> >  			goto out;
> >  		}
> > +		arch_make_page_accessible(page);

As Will pointed out, the return value definitely needs to be checked, there
will undoubtedly be scenarios where the page cannot be made accessible.

What is the use case for calling arch_make_page_accessible() in the follow()
and gup() paths?  Live migration is the only thing that comes to mind, and
for live migration I would expect you would want to keep the secure guest
running when copying pages to the target, i.e. use pre-copy.  That would
conflict with converting the page in place.  Rather, migration would use a
separate dedicated path to copy the encrypted contents of the secure page to
a completely different page, and send *that* across the wire so that the
guest can continue accessing the original page.

Am I missing a need to do this for the swap/reclaim case?  Or is there a
completely different use case I'm overlooking?

Tangentially related, hooks here could be quite useful for sanity checking
the kernel/KVM and/or debugging kernel/KVM bugs.  Would it make sense to
pass a param to arch_make_page_accessible() to provide some information as
to why the page needs to be made accessible?

> >  	}
> >  	if (flags & FOLL_TOUCH) {
> >  		if ((flags & FOLL_WRITE) &&
> > @@ -1870,6 +1871,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> >  
> >  		VM_BUG_ON_PAGE(compound_head(page) != head, page);
> >  
> > +		arch_make_page_accessible(page);
> >  		SetPageReferenced(page);
> >  		pages[*nr] = page;
> >  		(*nr)++;
> > diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> > index 2caf780a42e7..0f0bd14571b1 100644
> > --- a/mm/page-writeback.c
> > +++ b/mm/page-writeback.c
> > @@ -2806,6 +2806,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
> >  		inc_lruvec_page_state(page, NR_WRITEBACK);
> >  		inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
> >  	}
> > +	arch_make_page_accessible(page);
> >  	unlock_page_memcg(page);
> 
> As outlined by Ulrich, we can move the callback after the unlock.
> 
> >  	return ret;
> >  
> > 
> 



* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-13 19:56     ` Sean Christopherson
@ 2020-02-13 20:13       ` Christian Borntraeger
  2020-02-13 20:46         ` Sean Christopherson
  2020-02-17 20:55         ` Tom Lendacky
  0 siblings, 2 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-13 20:13 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Janosch Frank, Andrew Morton, Marc Zyngier, Tom Lendacky, KVM,
	Cornelia Huck, David Hildenbrand, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, kvm-ppc, Paolo Bonzini



On 13.02.20 20:56, Sean Christopherson wrote:
> On Mon, Feb 10, 2020 at 06:27:04PM +0100, Christian Borntraeger wrote:
>> CC Marc Zyngier for KVM on ARM.  Marc, see below. Will there be any
>> use for this on KVM/ARM in the future?
>>
>> CC Sean Christopherson/Tom Lendacky. Any obvious use case for Intel/AMD
>> to have a callback before a page is used for I/O?
> 
> Yes?
> 
>> Andrew (or other mm people) any chance to get an ACK for this change?
>> I could then carry that via s390 or KVM tree. Or if you want to carry
>> that yourself I can send an updated version (we need to kind of 
>> synchronize that Linus will pull the KVM changes after the mm changes).
>>
>> Andrea asked if others would benefit from this, so here are some more
>> information about this (and I can also put this into the patch
>> description).  So we have talked to the POWER folks. They do not use
>> the standard normal memory management, instead they have a hard split
>> between secure and normal memory. The secure memory is handled by
>> the hypervisor as device memory and the ultravisor and the hypervisor
>> move it back and forth when needed.
>>
>> On s390 there is no *separate* pool of physical pages that are secure.
>> Instead, *any* physical page can be marked as secure or not, by
>> setting a bit in a per-page data structure that hardware uses to stop
>> unauthorized access.  (That bit is under control of the ultravisor.)
>>
>> Note that one side effect of this strategy is that the decision
>> *which* secure pages to encrypt and then swap out is actually done by
>> the hypervisor, not the ultravisor.  In our case, the hypervisor is
>> Linux/KVM, so we're using the regular Linux memory management scheme
>> (active/inactive LRU lists etc.) to make this decision.  The advantage
>> is that the Ultravisor code does not need to itself implement any
>> memory management code, making it a lot simpler.
> 
> Disclaimer: I'm not familiar with s390 guest page faults or UV.  I tried
> to give myself a crash course, apologies if I'm way out in left field...
> 
> AIUI, pages will first be added to a secure guest by converting a normal,
> non-secure page to secure and stuffing it into the guest page tables.  To
> swap a page from a secure guest, arch_make_page_accessible() will be called
> to encrypt the page in place so that it can be accessed by the untrusted
> kernel/VMM and written out to disk.  And to fault the page back in, on s390
> a secure guest access to a non-secure page will generate a page fault with
> a dedicated type.  That fault routes directly to
> do_non_secure_storage_access(), which converts the page to secure and thus
> makes it re-accessible to the guest.
> 
> That all sounds sane and usable for Intel.
> 
> My big question is the follow/get flows, more on that below.
> 
>> However, in the end this is why we need the hook into Linux memory
>> management: once Linux has decided to swap a page out, we need to get
>> a chance to tell the Ultravisor to "export" the page (i.e., encrypt
>> its contents and mark it no longer secure).
>>
>> As outlined below this should be a no-op for anybody not opting in.
>>
>> Christian                                   
>>
>> On 07.02.20 12:39, Christian Borntraeger wrote:
>>> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
>>>
>>> With the introduction of protected KVM guests on s390 there is now a
>>> concept of inaccessible pages. These pages need to be made accessible
>>> before the host can access them.
>>>
>>> While cpu accesses will trigger a fault that can be resolved, I/O
>>> accesses will just fail.  We need to add a callback into architecture
>>> code for places that will do I/O, namely when writeback is started or
>>> when a page reference is taken.
>>>
>>> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
>>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>>> ---
>>>  include/linux/gfp.h | 6 ++++++
>>>  mm/gup.c            | 2 ++
>>>  mm/page-writeback.c | 1 +
>>>  3 files changed, 9 insertions(+)
>>>
>>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>>> index e5b817cb86e7..be2754841369 100644
>>> --- a/include/linux/gfp.h
>>> +++ b/include/linux/gfp.h
>>> @@ -485,6 +485,12 @@ static inline void arch_free_page(struct page *page, int order) { }
>>>  #ifndef HAVE_ARCH_ALLOC_PAGE
>>>  static inline void arch_alloc_page(struct page *page, int order) { }
>>>  #endif
>>> +#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
>>> +static inline int arch_make_page_accessible(struct page *page)
>>> +{
>>> +	return 0;
>>> +}
>>> +#endif
>>>  
>>>  struct page *
>>>  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
>>> diff --git a/mm/gup.c b/mm/gup.c
>>> index 7646bf993b25..a01262cd2821 100644
>>> --- a/mm/gup.c
>>> +++ b/mm/gup.c
>>> @@ -257,6 +257,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>>>  			page = ERR_PTR(-ENOMEM);
>>>  			goto out;
>>>  		}
>>> +		arch_make_page_accessible(page);
> 
> As Will pointed out, the return value definitely needs to be checked, there
> will undoubtedly be scenarios where the page cannot be made accessible.

Actually on s390 this should always succeed unless we have a bug.

But we can certainly provide a variant of that patch that does check the
return value: proper error handling for gup and a WARN_ON for
page-writeback.
> 
> What is the use case for calling arch_make_page_accessible() in the follow()
> and gup() paths?  Live migration is the only thing that comes to mind, and
> for live migration I would expect you would want to keep the secure guest
> running when copying pages to the target, i.e. use pre-copy.  That would
> conflict with converting the page in place.  Rather, migration would use a
> separate dedicated path to copy the encrypted contents of the secure page to
> a completely different page, and send *that* across the wire so that the
> guest can continue accessing the original page.
> Am I missing a need to do this for the swap/reclaim case?  Or is there a
> completely different use case I'm overlooking?

This is actually to protect the host against a malicious user space. For
example, a bad QEMU could simply start direct I/O on such protected memory.
We do not want userspace to be able to trigger I/O errors, and thus we
implemented the logic: whenever somebody accesses that page (gup) or does
I/O on it, make sure that this page can be accessed. When the guest tries
to access that page, we will wait in the page fault handler for writeback
to have finished and for the page_ref to be the expected value.



> 
> Tangentially related, hooks here could be quite useful for sanity checking
> the kernel/KVM and/or debugging kernel/KVM bugs.  Would it make sense to
> pass a param to arch_make_page_accessible() to provide some information as
> to why the page needs to be made accessible?

Some kind of enum that can be used optionally to optimize things?
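
Maybe (all names made up):

	enum make_accessible_reason {
		MAKE_ACCESSIBLE_GUP,
		MAKE_ACCESSIBLE_WRITEBACK,
	};

	int arch_make_page_accessible(struct page *page,
				      enum make_accessible_reason reason);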

> 
>>>  	}
>>>  	if (flags & FOLL_TOUCH) {
>>>  		if ((flags & FOLL_WRITE) &&
>>> @@ -1870,6 +1871,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>>  
>>>  		VM_BUG_ON_PAGE(compound_head(page) != head, page);
>>>  
>>> +		arch_make_page_accessible(page);
>>>  		SetPageReferenced(page);
>>>  		pages[*nr] = page;
>>>  		(*nr)++;
>>> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
>>> index 2caf780a42e7..0f0bd14571b1 100644
>>> --- a/mm/page-writeback.c
>>> +++ b/mm/page-writeback.c
>>> @@ -2806,6 +2806,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
>>>  		inc_lruvec_page_state(page, NR_WRITEBACK);
>>>  		inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
>>>  	}
>>> +	arch_make_page_accessible(page);
>>>  	unlock_page_memcg(page);
>>
>> As outlined by Ulrich, we can move the callback after the unlock.
>>
>>>  	return ret;
>>>  
>>>
>>




* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-13 20:13       ` Christian Borntraeger
@ 2020-02-13 20:46         ` Sean Christopherson
  2020-02-17 20:55         ` Tom Lendacky
  1 sibling, 0 replies; 47+ messages in thread
From: Sean Christopherson @ 2020-02-13 20:46 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Janosch Frank, Andrew Morton, Marc Zyngier, Tom Lendacky, KVM,
	Cornelia Huck, David Hildenbrand, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, kvm-ppc, Paolo Bonzini

On Thu, Feb 13, 2020 at 09:13:35PM +0100, Christian Borntraeger wrote:
> 
> On 13.02.20 20:56, Sean Christopherson wrote:
> > On Mon, Feb 10, 2020 at 06:27:04PM +0100, Christian Borntraeger wrote:
> > Am I missing a need to do this for the swap/reclaim case?  Or is there a
> > completely different use case I'm overlooking?
> 
> This is actually to protect the host against a malicious user space. For
> example, a bad QEMU could simply start direct I/O on such protected memory.
> We do not want userspace to be able to trigger I/O errors, and thus we
> implemented the logic: whenever somebody accesses that page (gup) or does
> I/O on it, make sure that this page can be accessed. When the guest tries
> to access that page, we will wait in the page fault handler for writeback
> to have finished and for the page_ref to be the expected value.

Ah. I was assuming the pages would be unmappable by userspace, enforced by
some other mechanism.

> > 
> > Tangentially related, hooks here could be quite useful for sanity checking
> > the kernel/KVM and/or debugging kernel/KVM bugs.  Would it make sense to
> > pass a param to arch_make_page_accessible() to provide some information as
> > to why the page needs to be made accessible?
> 
> Some kind of enum that can be used optionally to optimize things?

Not just optimize; in the case above it'd probably be preferable for us to
reject a userspace mapping outright, e.g. return -EFAULT if called from
gup()/follow(). Debug scenarios might also require differentiating between
writeback and "other".
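
With the hypothetical reason parameter from above, the arch code could
then do something like (sketch; page_is_secure() is made up here):

	if (reason == MAKE_ACCESSIBLE_GUP && page_is_secure(page))
		return -EFAULT;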



* Re: [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests
  2020-02-07 11:39 ` [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests Christian Borntraeger
  2020-02-12 13:42   ` Cornelia Huck
@ 2020-02-14 17:59   ` David Hildenbrand
  2020-02-14 21:17     ` Christian Borntraeger
  1 sibling, 1 reply; 47+ messages in thread
From: David Hildenbrand @ 2020-02-14 17:59 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton

>  
>  /*
> @@ -1086,12 +1106,16 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>  					    unsigned long addr,
>  					    pte_t *ptep, int full)
>  {
> +	pte_t res;

Empty line missing.

>  	if (full) {
> -		pte_t pte = *ptep;
> +		res = *ptep;
>  		*ptep = __pte(_PAGE_INVALID);
> -		return pte;
> +	} else {
> +		res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
>  	}
> -	return ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
> +	if (mm_is_protected(mm) && pte_present(res))
> +		uv_convert_from_secure(pte_val(res) & PAGE_MASK);
> +	return res;
>  }

[...]

> +int uv_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb);
> +int uv_convert_from_secure(unsigned long paddr);
> +
> +static inline int uv_convert_to_secure(struct gmap *gmap, unsigned long gaddr)
> +{
> +	struct uv_cb_cts uvcb = {
> +		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
> +		.header.len = sizeof(uvcb),
> +		.guest_handle = gmap->guest_handle,
> +		.gaddr = gaddr,
> +	};
> +
> +	return uv_make_secure(gmap, gaddr, &uvcb);
> +}

I'd actually suggest naming everything that eats a gmap "gmap_",

e.g., "gmap_make_secure()"

[...]

>  
>  #if defined(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) ||                          \
> diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> index a06a628a88da..15ac598a3d8d 100644
> --- a/arch/s390/kernel/uv.c
> +++ b/arch/s390/kernel/uv.c
> @@ -9,6 +9,8 @@
>  #include <linux/sizes.h>
>  #include <linux/bitmap.h>
>  #include <linux/memblock.h>
> +#include <linux/pagemap.h>
> +#include <linux/swap.h>
>  #include <asm/facility.h>
>  #include <asm/sections.h>
>  #include <asm/uv.h>
> @@ -99,4 +101,174 @@ void adjust_to_uv_max(unsigned long *vmax)
>  	if (prot_virt_host && *vmax > uv_info.max_sec_stor_addr)
>  		*vmax = uv_info.max_sec_stor_addr;
>  }
> +
> +static int __uv_pin_shared(unsigned long paddr)
> +{
> +	struct uv_cb_cfs uvcb = {
> +		.header.cmd	= UVC_CMD_PIN_PAGE_SHARED,
> +		.header.len	= sizeof(uvcb),
> +		.paddr		= paddr,

Please drop all the superfluous spaces (just as in the other UV calls).

> +	};
> +
> +	if (uv_call(0, (u64)&uvcb))
> +		return -EINVAL;
> +	return 0;
> +}

[...]

> +static int make_secure_pte(pte_t *ptep, unsigned long addr, void *data)
> +{
> +	struct conv_params *params = data;
> +	pte_t entry = READ_ONCE(*ptep);
> +	struct page *page;
> +	int expected, rc = 0;
> +
> +	if (!pte_present(entry))
> +		return -ENXIO;
> +	if (pte_val(entry) & (_PAGE_INVALID | _PAGE_PROTECT))
> +		return -ENXIO;
> +
> +	page = pte_page(entry);
> +	if (page != params->page)
> +		return -ENXIO;
> +
> +	if (PageWriteback(page))
> +		return -EAGAIN;
> +	expected = expected_page_refs(page);

I do wonder if we could factor out expected_page_refs() and reuse it
from other places ...

I do wonder about huge page backing of guests, and especially
hpage_nr_pages(page) used in mm/migrate.c:expected_page_refs(). But I
can spot some hugepage exclusion below ... This needs comments.
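
(For reference, what such a helper boils down to; my reconstruction, not
quoted from the patch:)

	static int expected_page_refs(struct page *page)
	{
		int res = page_mapcount(page);	/* one reference per mapping */

		if (PageSwapCache(page))
			res++;		/* the swap cache holds a reference */
		else if (page_mapping(page))
			res++;		/* the page cache holds a reference */
		return res;
	}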

> +	if (!page_ref_freeze(page, expected))
> +		return -EBUSY;
> +	set_bit(PG_arch_1, &page->flags);

Can we please document somewhere how PG_arch_1 is used on s390x? (page)

"The generic code guarantees that this bit is cleared for a page when it
first is entered into the page cache" - should not be an issue, right?

> +	rc = uv_call(0, (u64)params->uvcb);
> +	page_ref_unfreeze(page, expected);
> +	if (rc)
> +		rc = (params->uvcb->rc == 0x10a) ? -ENXIO : -EINVAL;
> +	return rc;
> +}
> +
> +/*
> + * Requests the Ultravisor to make a page accessible to a guest.
> + * If it's brought in the first time, it will be cleared. If
> + * it has been exported before, it will be decrypted and integrity
> + * checked.
> + *
> + * @gmap: Guest mapping
> + * @gaddr: Guest 2 absolute address to be imported

I'd just drop the (incomplete) parameter documentation; everybody
reaching this point should know what a gmap and a gaddr are ...

> + */
> +int uv_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
> +{
> +	struct conv_params params = { .uvcb = uvcb };
> +	struct vm_area_struct *vma;
> +	unsigned long uaddr;
> +	int rc, local_drain = 0;
> +
> +again:
> +	rc = -EFAULT;
> +	down_read(&gmap->mm->mmap_sem);
> +
> +	uaddr = __gmap_translate(gmap, gaddr);
> +	if (IS_ERR_VALUE(uaddr))
> +		goto out;
> +	vma = find_vma(gmap->mm, uaddr);
> +	if (!vma)
> +		goto out;
> +	if (is_vm_hugetlb_page(vma))
> +		goto out;

Hah, there it is! How is it enforced/excluded on the upper layers? Will
hpage=true fail with prot virt? What if a guest is not a protected guest
but wants to use huge pages? This needs comments/a patch description.

> +
> +	rc = -ENXIO;
> +	params.page = follow_page(vma, uaddr, FOLL_WRITE | FOLL_NOWAIT);
> +	if (IS_ERR_OR_NULL(params.page))
> +		goto out;
> +
> +	lock_page(params.page);
> +	rc = apply_to_page_range(gmap->mm, uaddr, PAGE_SIZE, make_secure_pte, &params);

Ehm, isn't it just always a single page?

> +	unlock_page(params.page);
> +out:
> +	up_read(&gmap->mm->mmap_sem);
> +
> +	if (rc == -EBUSY) {
> +		if (local_drain) {
> +			lru_add_drain_all();
> +			return -EAGAIN;
> +		}
> +		lru_add_drain();

Comments please on why that is performed.

> +		local_drain = 1;
> +		goto again;

Could we end up in an endless loop?

> +	} else if (rc == -ENXIO) {
> +		if (gmap_fault(gmap, gaddr, FAULT_FLAG_WRITE))
> +			return -EFAULT;
> +		return -EAGAIN;
> +	}
> +	return rc;
> +}
> +EXPORT_SYMBOL_GPL(uv_make_secure);
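The -EBUSY/drain logic above could be documented along these lines (a
sketch of the rationale as discussed later in this thread, not the final
in-tree comment):

	/*
	 * -EBUSY means the page had more references than expected,
	 * typically because it still sits on a CPU-local LRU pagevec.
	 * Drain the local pagevecs and retry once; if the page is
	 * still busy after that, drain the pagevecs on all CPUs and
	 * return -EAGAIN so the caller retries from scratch instead
	 * of looping here forever.
	 */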
> +
> +/**
> + * To be called with the page locked or with an extra reference!
> + */
> +int arch_make_page_accessible(struct page *page)
> +{
> +	int rc = 0;
> +
> +	if (PageHuge(page))
> +		return 0;

Ah, another instance. Comments please on why.

> +
> +	if (!test_bit(PG_arch_1, &page->flags))
> +		return 0;

"Can you describe the meaning of this bit with three words"? Or a couple
more? :D

"once upon a time, the page was secure and still might be" ?
"the page is secure and therefore inaccessible" ?

> +
> +	rc = __uv_pin_shared(page_to_phys(page));
> +	if (!rc) {
> +		clear_bit(PG_arch_1, &page->flags);
> +		return 0;
> +	}
> +
> +	rc = uv_convert_from_secure(page_to_phys(page));
> +	if (!rc) {
> +		clear_bit(PG_arch_1, &page->flags);
> +		return 0;
> +	}
> +
> +	return rc;
> +}
> +EXPORT_SYMBOL_GPL(arch_make_page_accessible);
> +
>  #endif
> 

More code comments would be highly appreciated!

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 06/35] s390/mm: add (non)secure page access exceptions handlers
  2020-02-07 11:39 ` [PATCH 06/35] s390/mm: add (non)secure page access exceptions handlers Christian Borntraeger
@ 2020-02-14 18:05   ` David Hildenbrand
  2020-02-14 19:59     ` Christian Borntraeger
  0 siblings, 1 reply; 47+ messages in thread
From: David Hildenbrand @ 2020-02-14 18:05 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton, Janosch Frank

On 07.02.20 12:39, Christian Borntraeger wrote:
> From: Vasily Gorbik <gor@linux.ibm.com>
> 
> Add exception handlers performing transparent transition of non-secure
> pages to secure (import) upon guest access and secure pages to
> non-secure (export) upon hypervisor access.
> 
> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
> [frankja@linux.ibm.com: adding checks for failures]
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> [imbrenda@linux.ibm.com:  adding a check for gmap fault]
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> [borntraeger@de.ibm.com: patch merging, splitting, fixing]
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  arch/s390/kernel/pgm_check.S |  4 +-
>  arch/s390/mm/fault.c         | 86 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 88 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/s390/kernel/pgm_check.S b/arch/s390/kernel/pgm_check.S
> index 59dee9d3bebf..27ac4f324c70 100644
> --- a/arch/s390/kernel/pgm_check.S
> +++ b/arch/s390/kernel/pgm_check.S
> @@ -78,8 +78,8 @@ PGM_CHECK(do_dat_exception)		/* 39 */
>  PGM_CHECK(do_dat_exception)		/* 3a */
>  PGM_CHECK(do_dat_exception)		/* 3b */
>  PGM_CHECK_DEFAULT			/* 3c */
> -PGM_CHECK_DEFAULT			/* 3d */
> -PGM_CHECK_DEFAULT			/* 3e */
> +PGM_CHECK(do_secure_storage_access)	/* 3d */
> +PGM_CHECK(do_non_secure_storage_access)	/* 3e */
>  PGM_CHECK_DEFAULT			/* 3f */
>  PGM_CHECK_DEFAULT			/* 40 */
>  PGM_CHECK_DEFAULT			/* 41 */
> diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
> index 7b0bb475c166..fab4219fa0be 100644
> --- a/arch/s390/mm/fault.c
> +++ b/arch/s390/mm/fault.c
> @@ -38,6 +38,7 @@
>  #include <asm/irq.h>
>  #include <asm/mmu_context.h>
>  #include <asm/facility.h>
> +#include <asm/uv.h>
>  #include "../kernel/entry.h"
>  
>  #define __FAIL_ADDR_MASK -4096L
> @@ -816,3 +817,88 @@ static int __init pfault_irq_init(void)
>  early_initcall(pfault_irq_init);
>  
>  #endif /* CONFIG_PFAULT */
> +
> +#if IS_ENABLED(CONFIG_KVM)
> +void do_secure_storage_access(struct pt_regs *regs)
> +{
> +	unsigned long addr = regs->int_parm_long & __FAIL_ADDR_MASK;
> +	struct vm_area_struct *vma;
> +	struct mm_struct *mm;
> +	struct page *page;
> +	int rc;
> +
> +	switch (get_fault_type(regs)) {
> +	case USER_FAULT:
> +		mm = current->mm;
> +		down_read(&mm->mmap_sem);
> +		vma = find_vma(mm, addr);
> +		if (!vma) {
> +			up_read(&mm->mmap_sem);
> +			do_fault_error(regs, VM_READ | VM_WRITE, VM_FAULT_BADMAP);
> +			break;
> +		}
> +		page = follow_page(vma, addr, FOLL_WRITE | FOLL_GET);
> +		if (IS_ERR_OR_NULL(page)) {
> +			up_read(&mm->mmap_sem);
> +			break;
> +		}
> +		if (arch_make_page_accessible(page))
> +			send_sig(SIGSEGV, current, 0);
> +		put_page(page);
> +		up_read(&mm->mmap_sem);
> +		break;
> +	case KERNEL_FAULT:
> +		page = phys_to_page(addr);
> +		if (unlikely(!try_get_page(page)))
> +			break;
> +		rc = arch_make_page_accessible(page);
> +		put_page(page);
> +		if (rc)
> +			BUG();
> +		break;
> +	case VDSO_FAULT:
> +		/* fallthrough */
> +	case GMAP_FAULT:
> +		/* fallthrough */

Could we ever get here from the SIE?

> +	default:
> +		do_fault_error(regs, VM_READ | VM_WRITE, VM_FAULT_BADMAP);
> +		WARN_ON_ONCE(1);
> +	}
> +}
> +NOKPROBE_SYMBOL(do_secure_storage_access);
> +
> +void do_non_secure_storage_access(struct pt_regs *regs)
> +{
> +	unsigned long gaddr = regs->int_parm_long & __FAIL_ADDR_MASK;
> +	struct gmap *gmap = (struct gmap *)S390_lowcore.gmap;
> +	struct uv_cb_cts uvcb = {
> +		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
> +		.header.len = sizeof(uvcb),
> +		.guest_handle = gmap->guest_handle,
> +		.gaddr = gaddr,
> +	};
> +	int rc;
> +
> +	if (get_fault_type(regs) != GMAP_FAULT) {
> +		do_fault_error(regs, VM_READ | VM_WRITE, VM_FAULT_BADMAP);
> +		WARN_ON_ONCE(1);
> +		return;
> +	}
> +
> +	rc = uv_make_secure(gmap, gaddr, &uvcb);
> +	if (rc == -EINVAL && uvcb.header.rc != 0x104)
> +		send_sig(SIGSEGV, current, 0);


Looks good to me, but I don't feel ready for an r-b just yet. I'll
have to let that sink in :)

Assumed-is-okay-by: David Hildenbrand <david@redhat.com>


-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 11/35] KVM: s390/mm: Make pages accessible before destroying the guest
  2020-02-07 11:39 ` [PATCH 11/35] KVM: s390/mm: Make pages accessible before destroying the guest Christian Borntraeger
@ 2020-02-14 18:40   ` David Hildenbrand
  0 siblings, 0 replies; 47+ messages in thread
From: David Hildenbrand @ 2020-02-14 18:40 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton

On 07.02.20 12:39, Christian Borntraeger wrote:
> Before we destroy the secure configuration, we better make all
> pages accessible again. This also happens during reboot, where we reboot
> into a non-secure guest that can then switch back into secure mode. As
> this "new" secure guest will have a new ID, we cannot reuse the old page
> state.
> 
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> Reviewed-by: Thomas Huth <thuth@redhat.com>
> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
> ---
>  arch/s390/include/asm/pgtable.h |  1 +
>  arch/s390/kvm/pv.c              |  2 ++
>  arch/s390/mm/gmap.c             | 35 +++++++++++++++++++++++++++++++++
>  3 files changed, 38 insertions(+)
> 
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index dbd1453e6924..3e2ea997c334 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -1669,6 +1669,7 @@ extern int vmem_remove_mapping(unsigned long start, unsigned long size);
>  extern int s390_enable_sie(void);
>  extern int s390_enable_skey(void);
>  extern void s390_reset_cmma(struct mm_struct *mm);
> +extern void s390_reset_acc(struct mm_struct *mm);
>  
>  /* s390 has a private copy of get unmapped area to deal with cache synonyms */
>  #define HAVE_ARCH_UNMAPPED_AREA
> diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> index 4795e61f4e16..392795a92bd9 100644
> --- a/arch/s390/kvm/pv.c
> +++ b/arch/s390/kvm/pv.c
> @@ -66,6 +66,8 @@ int kvm_s390_pv_destroy_vm(struct kvm *kvm)
>  	int rc;
>  	u32 ret;
>  
> +	/* make all pages accessible before destroying the guest */
> +	s390_reset_acc(kvm->mm);
>  	rc = uv_cmd_nodata(kvm_s390_pv_handle(kvm),
>  			   UVC_CMD_DESTROY_SEC_CONF, &ret);
>  	WRITE_ONCE(kvm->arch.gmap->guest_handle, 0);
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 7291452fe5f0..27926a06df32 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2650,3 +2650,38 @@ void s390_reset_cmma(struct mm_struct *mm)
>  	up_write(&mm->mmap_sem);
>  }
>  EXPORT_SYMBOL_GPL(s390_reset_cmma);
> +
> +/*
> + * make inaccessible pages accessible again
> + */
> +static int __s390_reset_acc(pte_t *ptep, unsigned long addr,
> +			    unsigned long next, struct mm_walk *walk)
> +{
> +	pte_t pte = READ_ONCE(*ptep);
> +
> +	if (pte_present(pte))
> +		WARN_ON_ONCE(uv_convert_from_secure(pte_val(pte) & PAGE_MASK));
> +	return 0;
> +}
> +
> +static const struct mm_walk_ops reset_acc_walk_ops = {
> +	.pte_entry		= __s390_reset_acc,
> +};
> +
> +#include <linux/sched/mm.h>
> +void s390_reset_acc(struct mm_struct *mm)
> +{
> +	/*
> +	 * we might be called during
> +	 * reset:                             we walk the pages and clear
> +	 * close of all kvm file descriptors: we walk the pages and clear
> +	 * exit of process on fd closure:     vma already gone, do nothing
> +	 */
> +	if (!mmget_not_zero(mm))
> +		return;
> +	down_read(&mm->mmap_sem);
> +	walk_page_range(mm, 0, TASK_SIZE, &reset_acc_walk_ops, NULL);
> +	up_read(&mm->mmap_sem);
> +	mmput(mm);
> +}
> +EXPORT_SYMBOL_GPL(s390_reset_acc);
> 
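A comment along these lines might also help here (an inference from the
design discussed earlier in this thread, not text from the patch):

	/*
	 * Only present PTEs need converting: a secure page that was
	 * swapped out has already been exported (made accessible) by
	 * the arch_make_page_accessible() hook on the writeback path.
	 */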

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 06/35] s390/mm: add (non)secure page access exceptions handlers
  2020-02-14 18:05   ` David Hildenbrand
@ 2020-02-14 19:59     ` Christian Borntraeger
  0 siblings, 0 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-14 19:59 UTC (permalink / raw)
  To: David Hildenbrand, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton, Janosch Frank



On 14.02.20 19:05, David Hildenbrand wrote:
> On 07.02.20 12:39, Christian Borntraeger wrote:
>> From: Vasily Gorbik <gor@linux.ibm.com>
>>
>> Add exception handlers performing transparent transition of non-secure
>> pages to secure (import) upon guest access and secure pages to
>> non-secure (export) upon hypervisor access.
>>
>> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
>> [frankja@linux.ibm.com: adding checks for failures]
>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>> [imbrenda@linux.ibm.com:  adding a check for gmap fault]
>> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
>> [borntraeger@de.ibm.com: patch merging, splitting, fixing]
>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>> ---
>>  arch/s390/kernel/pgm_check.S |  4 +-
>>  arch/s390/mm/fault.c         | 86 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 88 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/s390/kernel/pgm_check.S b/arch/s390/kernel/pgm_check.S
>> index 59dee9d3bebf..27ac4f324c70 100644
>> --- a/arch/s390/kernel/pgm_check.S
>> +++ b/arch/s390/kernel/pgm_check.S
>> @@ -78,8 +78,8 @@ PGM_CHECK(do_dat_exception)		/* 39 */
>>  PGM_CHECK(do_dat_exception)		/* 3a */
>>  PGM_CHECK(do_dat_exception)		/* 3b */
>>  PGM_CHECK_DEFAULT			/* 3c */
>> -PGM_CHECK_DEFAULT			/* 3d */
>> -PGM_CHECK_DEFAULT			/* 3e */
>> +PGM_CHECK(do_secure_storage_access)	/* 3d */
>> +PGM_CHECK(do_non_secure_storage_access)	/* 3e */
>>  PGM_CHECK_DEFAULT			/* 3f */
>>  PGM_CHECK_DEFAULT			/* 40 */
>>  PGM_CHECK_DEFAULT			/* 41 */
>> diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
>> index 7b0bb475c166..fab4219fa0be 100644
>> --- a/arch/s390/mm/fault.c
>> +++ b/arch/s390/mm/fault.c
>> @@ -38,6 +38,7 @@
>>  #include <asm/irq.h>
>>  #include <asm/mmu_context.h>
>>  #include <asm/facility.h>
>> +#include <asm/uv.h>
>>  #include "../kernel/entry.h"
>>  
>>  #define __FAIL_ADDR_MASK -4096L
>> @@ -816,3 +817,88 @@ static int __init pfault_irq_init(void)
>>  early_initcall(pfault_irq_init);
>>  
>>  #endif /* CONFIG_PFAULT */
>> +
>> +#if IS_ENABLED(CONFIG_KVM)
>> +void do_secure_storage_access(struct pt_regs *regs)
>> +{
>> +	unsigned long addr = regs->int_parm_long & __FAIL_ADDR_MASK;
>> +	struct vm_area_struct *vma;
>> +	struct mm_struct *mm;
>> +	struct page *page;
>> +	int rc;
>> +
>> +	switch (get_fault_type(regs)) {
>> +	case USER_FAULT:
>> +		mm = current->mm;
>> +		down_read(&mm->mmap_sem);
>> +		vma = find_vma(mm, addr);
>> +		if (!vma) {
>> +			up_read(&mm->mmap_sem);
>> +			do_fault_error(regs, VM_READ | VM_WRITE, VM_FAULT_BADMAP);
>> +			break;
>> +		}
>> +		page = follow_page(vma, addr, FOLL_WRITE | FOLL_GET);
>> +		if (IS_ERR_OR_NULL(page)) {
>> +			up_read(&mm->mmap_sem);
>> +			break;
>> +		}
>> +		if (arch_make_page_accessible(page))
>> +			send_sig(SIGSEGV, current, 0);
>> +		put_page(page);
>> +		up_read(&mm->mmap_sem);
>> +		break;
>> +	case KERNEL_FAULT:
>> +		page = phys_to_page(addr);
>> +		if (unlikely(!try_get_page(page)))
>> +			break;
>> +		rc = arch_make_page_accessible(page);
>> +		put_page(page);
>> +		if (rc)
>> +			BUG();
>> +		break;
>> +	case VDSO_FAULT:
>> +		/* fallthrough */
>> +	case GMAP_FAULT:
>> +		/* fallthrough */
> 
> Could we ever get here from the SIE?

GMAP_FAULT is only set if we came from the SIE critical section, so unless we have a bug, no.

> 
>> +	default:
>> +		do_fault_error(regs, VM_READ | VM_WRITE, VM_FAULT_BADMAP);
>> +		WARN_ON_ONCE(1);
>> +	}
>> +}
>> +NOKPROBE_SYMBOL(do_secure_storage_access);
>> +
>> +void do_non_secure_storage_access(struct pt_regs *regs)
>> +{
>> +	unsigned long gaddr = regs->int_parm_long & __FAIL_ADDR_MASK;
>> +	struct gmap *gmap = (struct gmap *)S390_lowcore.gmap;
>> +	struct uv_cb_cts uvcb = {
>> +		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
>> +		.header.len = sizeof(uvcb),
>> +		.guest_handle = gmap->guest_handle,
>> +		.gaddr = gaddr,
>> +	};
>> +	int rc;
>> +
>> +	if (get_fault_type(regs) != GMAP_FAULT) {
>> +		do_fault_error(regs, VM_READ | VM_WRITE, VM_FAULT_BADMAP);
>> +		WARN_ON_ONCE(1);
>> +		return;
>> +	}
>> +
>> +	rc = uv_make_secure(gmap, gaddr, &uvcb);
>> +	if (rc == -EINVAL && uvcb.header.rc != 0x104)
>> +		send_sig(SIGSEGV, current, 0);
> 
> 
> Looks good to me, but I don't feel ready for an r-b just yet. I'll
> have to let that sink in :)
> 
> Assumed-is-okay-by: David Hildenbrand <david@redhat.com>
> 
> 



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests
  2020-02-14 17:59   ` David Hildenbrand
@ 2020-02-14 21:17     ` Christian Borntraeger
  0 siblings, 0 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-14 21:17 UTC (permalink / raw)
  To: David Hildenbrand, Janosch Frank
  Cc: KVM, Cornelia Huck, Thomas Huth, Ulrich Weigand,
	Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
	Vasily Gorbik, linux-mm, Andrew Morton

In general this patch has changed a lot, but several comments still apply

On 14.02.20 18:59, David Hildenbrand wrote:
>>  
>>  /*
>> @@ -1086,12 +1106,16 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>>  					    unsigned long addr,
>>  					    pte_t *ptep, int full)
>>  {
>> +	pte_t res;
> 
> Empty line missing.

ack

> 
>>  	if (full) {
>> -		pte_t pte = *ptep;
>> +		res = *ptep;
>>  		*ptep = __pte(_PAGE_INVALID);
>> -		return pte;
>> +	} else {
>> +		res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
>>  	}
>> -	return ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
>> +	if (mm_is_protected(mm) && pte_present(res))
>> +		uv_convert_from_secure(pte_val(res) & PAGE_MASK);
>> +	return res;
>>  }
> 
> [...]
> 
>> +int uv_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb);
>> +int uv_convert_from_secure(unsigned long paddr);
>> +
>> +static inline int uv_convert_to_secure(struct gmap *gmap, unsigned long gaddr)
>> +{
>> +	struct uv_cb_cts uvcb = {
>> +		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
>> +		.header.len = sizeof(uvcb),
>> +		.guest_handle = gmap->guest_handle,
>> +		.gaddr = gaddr,
>> +	};
>> +
>> +	return uv_make_secure(gmap, gaddr, &uvcb);
>> +}
> 
> I'd actually suggest naming everything that eats a gmap "gmap_",
> 
> e.g., "gmap_make_secure()"
> 
> [...]

ack.

> 
>>  
>>  #if defined(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) ||                          \
>> diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
>> index a06a628a88da..15ac598a3d8d 100644
>> --- a/arch/s390/kernel/uv.c
>> +++ b/arch/s390/kernel/uv.c
>> @@ -9,6 +9,8 @@
>>  #include <linux/sizes.h>
>>  #include <linux/bitmap.h>
>>  #include <linux/memblock.h>
>> +#include <linux/pagemap.h>
>> +#include <linux/swap.h>
>>  #include <asm/facility.h>
>>  #include <asm/sections.h>
>>  #include <asm/uv.h>
>> @@ -99,4 +101,174 @@ void adjust_to_uv_max(unsigned long *vmax)
>>  	if (prot_virt_host && *vmax > uv_info.max_sec_stor_addr)
>>  		*vmax = uv_info.max_sec_stor_addr;
>>  }
>> +
>> +static int __uv_pin_shared(unsigned long paddr)
>> +{
>> +	struct uv_cb_cfs uvcb = {
>> +		.header.cmd	= UVC_CMD_PIN_PAGE_SHARED,
>> +		.header.len	= sizeof(uvcb),
>> +		.paddr		= paddr,
> 
> please drop all the superfluous spaces (just as in the other uv calls).

ack

> 
>> +	};
>> +
>> +	if (uv_call(0, (u64)&uvcb))
>> +		return -EINVAL;
>> +	return 0;
>> +}
> 
> [...]
> 
>> +static int make_secure_pte(pte_t *ptep, unsigned long addr, void *data)
>> +{
>> +	struct conv_params *params = data;
>> +	pte_t entry = READ_ONCE(*ptep);
>> +	struct page *page;
>> +	int expected, rc = 0;
>> +
>> +	if (!pte_present(entry))
>> +		return -ENXIO;
>> +	if (pte_val(entry) & (_PAGE_INVALID | _PAGE_PROTECT))
>> +		return -ENXIO;
>> +
>> +	page = pte_page(entry);
>> +	if (page != params->page)
>> +		return -ENXIO;
>> +
>> +	if (PageWriteback(page))
>> +		return -EAGAIN;
>> +	expected = expected_page_refs(page);
> 
> I do wonder if we could factor out expected_page_refs() and reuse it
> from other places ...
> 
> I do wonder about huge page backing of guests, and especially
> hpage_nr_pages(page) used in mm/migrate.c:expected_page_refs(). But I
> can spot some hugepage exclusion below ... This needs comments.

Yes, we looked into several places and ALL places do their own math with their
own side conditions. There is no single function that accounts for all possible
conditions and I am not going to start that now given the review bandwidth of
the mm tree.

I will add:
/*
 * Calculate the expected ref_count for a page that would otherwise have no
 * further pins. This was cribbed from similar functions in other places in
 * the kernel, but with some slight modifications. We know that a secure
 * page cannot be a huge page, for example.
 */
to expected_page_refs()

and something to the hugetlb check.




> 
>> +	if (!page_ref_freeze(page, expected))
>> +		return -EBUSY;
>> +	set_bit(PG_arch_1, &page->flags);
> 
> Can we please document somewhere how PG_arch_1 is used on s390x? (page)
> 
> "The generic code guarantees that this bit is cleared for a page when it
> first is entered into the page cache" - should not be an issue, right?

Right
> 
>> +	rc = uv_call(0, (u64)params->uvcb);
>> +	page_ref_unfreeze(page, expected);
>> +	if (rc)
>> +		rc = (params->uvcb->rc == 0x10a) ? -ENXIO : -EINVAL;
>> +	return rc;
>> +}
>> +
>> +/*
>> + * Requests the Ultravisor to make a page accessible to a guest.
>> + * If it's brought in the first time, it will be cleared. If
>> + * it has been exported before, it will be decrypted and integrity
>> + * checked.
>> + *
>> + * @gmap: Guest mapping
>> + * @gaddr: Guest 2 absolute address to be imported
> 
> I'd just drop the (incomplete) parameter documentation, everybody
> reaching this point should know what a gmap and a gaddr are ...

ack.
> 
>> + */
>> +int uv_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
>> +{
>> +	struct conv_params params = { .uvcb = uvcb };
>> +	struct vm_area_struct *vma;
>> +	unsigned long uaddr;
>> +	int rc, local_drain = 0;
>> +
>> +again:
>> +	rc = -EFAULT;
>> +	down_read(&gmap->mm->mmap_sem);
>> +
>> +	uaddr = __gmap_translate(gmap, gaddr);
>> +	if (IS_ERR_VALUE(uaddr))
>> +		goto out;
>> +	vma = find_vma(gmap->mm, uaddr);
>> +	if (!vma)
>> +		goto out;
>> +	if (is_vm_hugetlb_page(vma))
>> +		goto out;
> 
> Hah, there it is! How is it enforced on upper layers/excluded? Will
> hpage=true fail with prot virt? What if a guest is not a protected guest
> but wants to use huge pages? This needs comments/patch description.

will add

        /*
         * Secure pages cannot be huge and userspace should not combine both.
         * In case userspace does it anyway this will result in an -EFAULT for
         * the unpack. The guest thus never reaches secure mode. If
         * userspace is playing dirty tricks with mapping huge pages later
         * on, this will result in a segmentation fault.
         */


> 
>> +
>> +	rc = -ENXIO;
>> +	params.page = follow_page(vma, uaddr, FOLL_WRITE | FOLL_NOWAIT);
>> +	if (IS_ERR_OR_NULL(params.page))
>> +		goto out;
>> +
>> +	lock_page(params.page);
>> +	rc = apply_to_page_range(gmap->mm, uaddr, PAGE_SIZE, make_secure_pte, &params);
> 
> Ehm, isn't it just always a single page?

Yes, already fixed.

> 
>> +	unlock_page(params.page);
>> +out:
>> +	up_read(&gmap->mm->mmap_sem);
>> +
>> +	if (rc == -EBUSY) {
>> +		if (local_drain) {
>> +			lru_add_drain_all();
>> +			return -EAGAIN;
>> +		}
>> +		lru_add_drain();
> 
> Comments please on why that is performed.

done

> 
>> +		local_drain = 1;
[..]

>> +
>> +	if (PageHuge(page))
>> +		return 0;
> 
> Ah, another instance. Comments please on why.
> 
>> +
>> +	if (!test_bit(PG_arch_1, &page->flags))
>> +		return 0;
> 
> "Can you describe the meaning of this bit with three words"? Or a couple
> more? :D
> 
> "once upon a time, the page was secure and still might be" ?
> "the page is secure and therefore inaccessible" ?


        /*
         * PG_arch_1 is used in 3 places:
         * 1. for kernel page tables during early boot
         * 2. for storage keys of huge pages and KVM
         * 3. As an indication that this page might be secure. This can
         *    overindicate, e.g. we set the bit before calling
         *    convert_to_secure.
         * As secure pages are never huge, all 3 variants can co-exist.
         */

> 
>> +
>> +	rc = __uv_pin_shared(page_to_phys(page));
>> +	if (!rc) {
>> +		clear_bit(PG_arch_1, &page->flags);
>> +		return 0;
>> +	}
>> +
>> +	rc = uv_convert_from_secure(page_to_phys(page));
>> +	if (!rc) {
>> +		clear_bit(PG_arch_1, &page->flags);
>> +		return 0;
>> +	}
>> +
>> +	return rc;
>> +}
>> +EXPORT_SYMBOL_GPL(arch_make_page_accessible);
>> +
>>  #endif
>>
> 
> More code comments would be highly appreciated!
> 
done



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-13 20:13       ` Christian Borntraeger
  2020-02-13 20:46         ` Sean Christopherson
@ 2020-02-17 20:55         ` Tom Lendacky
  2020-02-17 21:14           ` Christian Borntraeger
  1 sibling, 1 reply; 47+ messages in thread
From: Tom Lendacky @ 2020-02-17 20:55 UTC (permalink / raw)
  To: Christian Borntraeger, Sean Christopherson
  Cc: Janosch Frank, Andrew Morton, Marc Zyngier, KVM, Cornelia Huck,
	David Hildenbrand, Thomas Huth, Ulrich Weigand, Claudio Imbrenda,
	Andrea Arcangeli, linux-s390, Michael Mueller, Vasily Gorbik,
	linux-mm, kvm-ppc, Paolo Bonzini

On 2/13/20 2:13 PM, Christian Borntraeger wrote:
> 
> 
> On 13.02.20 20:56, Sean Christopherson wrote:
>> On Mon, Feb 10, 2020 at 06:27:04PM +0100, Christian Borntraeger wrote:
>>> CC Marc Zyngier for KVM on ARM.  Marc, see below. Will there be any
>>> use for this on KVM/ARM in the future?
>>>
>>> CC Sean Christopherson/Tom Lendacky. Any obvious use case for Intel/AMD
>>> to have a callback before a page is used for I/O?

From an SEV-SNP perspective, I don't think so. The SEV-SNP architecture
uses page states, and having the hypervisor change the state from beneath
the guest might lead the guest to think it's being attacked, versus just
allowing the I/O to fail. Is the concern here flooding the console with
I/O error messages?

>>
>> Yes?
>>
>>> Andrew (or other mm people) any chance to get an ACK for this change?
>>> I could then carry that via s390 or KVM tree. Or if you want to carry
>>> that yourself I can send an updated version (we need to kind of 
>>> synchronize that Linus will pull the KVM changes after the mm changes).
>>>
>>> Andrea asked if others would benefit from this, so here are some more
>>> information about this (and I can also put this into the patch
>>> description).  So we have talked to the POWER folks. They do not use
>>> the standard normal memory management, instead they have a hard split
>>> between secure and normal memory. The secure memory  is the handled by
>>> the hypervisor as device memory and the ultravisor and the hypervisor
>>> move this forth and back when needed.
>>>
>>> On s390 there is no *separate* pool of physical pages that are secure.
>>> Instead, *any* physical page can be marked as secure or not, by
>>> setting a bit in a per-page data structure that hardware uses to stop
>>> unauthorized access.  (That bit is under control of the ultravisor.)
>>>
>>> Note that one side effect of this strategy is that the decision
>>> *which* secure pages to encrypt and then swap out is actually done by
>>> the hypervisor, not the ultravisor.  In our case, the hypervisor is
>>> Linux/KVM, so we're using the regular Linux memory management scheme
>>> (active/inactive LRU lists etc.) to make this decision.  The advantage
>>> is that the Ultravisor code does not need to itself implement any
>>> memory management code, making it a lot simpler.
>>
>> Disclaimer: I'm not familiar with s390 guest page faults or UV.  I tried
>> to give myself a crash course, apologies if I'm way out in left field...
>>
>> AIUI, pages will first be added to a secure guest by converting a normal,
>> non-secure page to secure and stuffing it into the guest page tables.  To
>> swap a page from a secure guest, arch_make_page_accessible() will be called
>> to encrypt the page in place so that it can be accessed by the untrusted
>> kernel/VMM and written out to disk.  And to fault the page back in, on s390
>> a secure guest access to a non-secure page will generate a page fault with
>> a dedicated type.  That fault routes directly to
>> do_non_secure_storage_access(), which converts the page to secure and thus
>> makes it re-accessible to the guest.
>>
>> That all sounds sane and usable for Intel.
>>
>> My big question is the follow/get flows, more on that below.
>>
>>> However, in the end this is why we need the hook into Linux memory
>>> management: once Linux has decided to swap a page out, we need to get
>>> a chance to tell the Ultravisor to "export" the page (i.e., encrypt
>>> its contents and mark it no longer secure).
>>>
>>> As outlined below this should be a no-op for anybody not opting in.
>>>
>>> Christian                                   
>>>
>>> On 07.02.20 12:39, Christian Borntraeger wrote:
>>>> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
>>>>
>>>> With the introduction of protected KVM guests on s390 there is now a
>>>> concept of inaccessible pages. These pages need to be made accessible
>>>> before the host can access them.
>>>>
>>>> While cpu accesses will trigger a fault that can be resolved, I/O
>>>> accesses will just fail.  We need to add a callback into architecture
>>>> code for places that will do I/O, namely when writeback is started or
>>>> when a page reference is taken.
>>>>
>>>> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
>>>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>>>> ---
>>>>  include/linux/gfp.h | 6 ++++++
>>>>  mm/gup.c            | 2 ++
>>>>  mm/page-writeback.c | 1 +
>>>>  3 files changed, 9 insertions(+)
>>>>
>>>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>>>> index e5b817cb86e7..be2754841369 100644
>>>> --- a/include/linux/gfp.h
>>>> +++ b/include/linux/gfp.h
>>>> @@ -485,6 +485,12 @@ static inline void arch_free_page(struct page *page, int order) { }
>>>>  #ifndef HAVE_ARCH_ALLOC_PAGE
>>>>  static inline void arch_alloc_page(struct page *page, int order) { }
>>>>  #endif
>>>> +#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
>>>> +static inline int arch_make_page_accessible(struct page *page)
>>>> +{
>>>> +	return 0;
>>>> +}
>>>> +#endif
>>>>  
>>>>  struct page *
>>>>  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
>>>> diff --git a/mm/gup.c b/mm/gup.c
>>>> index 7646bf993b25..a01262cd2821 100644
>>>> --- a/mm/gup.c
>>>> +++ b/mm/gup.c
>>>> @@ -257,6 +257,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>>>>  			page = ERR_PTR(-ENOMEM);
>>>>  			goto out;
>>>>  		}
>>>> +		arch_make_page_accessible(page);
>>
>> As Will pointed out, the return value definitely needs to be checked;
>> there will undoubtedly be scenarios where the page cannot be made
>> accessible.
> 
> Actually on s390 this should always succeed unless we have a bug.
> 
> But we can certainly provide a variant of that patch that does check the
> return value: proper error handling for gup and a WARN_ON for
> page-writeback.
>>
>> What is the use case for calling arch_make_page_accessible() in the follow()
>> and gup() paths?  Live migration is the only thing that comes to mind, and
>> for live migration I would expect you would want to keep the secure guest
>> running when copying pages to the target, i.e. use pre-copy.  That would
>> conflict with converting the page in place.  Rather, migration would use a
>> separate dedicated path to copy the encrypted contents of the secure page to
>> a completely different page, and send *that* across the wire so that the
>> guest can continue accessing the original page.
>> Am I missing a need to do this for the swap/reclaim case?  Or is there a
>> completely different use case I'm overlooking?
> 
> This is actually to protect the host against a malicious user space. For
> example, a bad QEMU could simply start direct I/O on such protected memory.
> We do not want userspace to be able to trigger I/O errors, and thus we
> implemented the logic "whenever somebody accesses that page (gup) or does
> I/O, make sure that this page can be accessed". When the guest tries
> to access that page, we will wait in the page fault handler for writeback to
> have finished and for the page_ref to be the expected value.

So in this case, when the guest tries to access the page, the page may now
be corrupted because I/O was allowed to be done to it? Or will the I/O
have been blocked in some way, but without generating the I/O error?

Thanks,
Tom

> 
> 
> 
>>
>> Tangentially related, hooks here could be quite useful for sanity checking
>> the kernel/KVM and/or debugging kernel/KVM bugs.  Would it make sense to
>> pass a param to arch_make_page_accessible() to provide some information as
>> to why the page needs to be made accessible?
> 
> Some kind of enum that can be used optionally to optimize things?
> 
>>
>>>>  	}
>>>>  	if (flags & FOLL_TOUCH) {
>>>>  		if ((flags & FOLL_WRITE) &&
>>>> @@ -1870,6 +1871,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>>>  
>>>>  		VM_BUG_ON_PAGE(compound_head(page) != head, page);
>>>>  
>>>> +		arch_make_page_accessible(page);
>>>>  		SetPageReferenced(page);
>>>>  		pages[*nr] = page;
>>>>  		(*nr)++;
>>>> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
>>>> index 2caf780a42e7..0f0bd14571b1 100644
>>>> --- a/mm/page-writeback.c
>>>> +++ b/mm/page-writeback.c
>>>> @@ -2806,6 +2806,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
>>>>  		inc_lruvec_page_state(page, NR_WRITEBACK);
>>>>  		inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
>>>>  	}
>>>> +	arch_make_page_accessible(page);
>>>>  	unlock_page_memcg(page);
>>>
>>> As outlined by Ulrich, we can move the callback after the unlock.
>>>
>>>>  	return ret;
>>>>  
>>>>
>>>
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-17 20:55         ` Tom Lendacky
@ 2020-02-17 21:14           ` Christian Borntraeger
  0 siblings, 0 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-17 21:14 UTC (permalink / raw)
  To: Tom Lendacky, Sean Christopherson
  Cc: Janosch Frank, Andrew Morton, Marc Zyngier, KVM, Cornelia Huck,
	David Hildenbrand, Thomas Huth, Ulrich Weigand, Claudio Imbrenda,
	Andrea Arcangeli, linux-s390, Michael Mueller, Vasily Gorbik,
	linux-mm, kvm-ppc, Paolo Bonzini



On 17.02.20 21:55, Tom Lendacky wrote:
[...]

>>> What is the use case for calling arch_make_page_accessible() in the follow()
>>> and gup() paths?  Live migration is the only thing that comes to mind, and
>>> for live migration I would expect you would want to keep the secure guest
>>> running when copying pages to the target, i.e. use pre-copy.  That would
>>> conflict with converting the page in place.  Rather, migration would use a
>>> separate dedicated path to copy the encrypted contents of the secure page to
>>> a completely different page, and send *that* across the wire so that the
>>> guest can continue accessing the original page.
>>> Am I missing a need to do this for the swap/reclaim case?  Or is there a
>>> completely different use case I'm overlooking?
>>
>> This is actually to protect the host against a malicious user space. For
>> example, a bad QEMU could simply start direct I/O on such protected memory.
>> We do not want userspace to be able to trigger I/O errors, and thus we
>> implemented the logic "whenever somebody accesses that page (gup) or does
>> I/O, make sure that this page can be accessed". When the guest tries
>> to access that page, we will wait in the page fault handler for writeback to
>> have finished and for the page_ref to be the expected value.
> 
> So in this case, when the guest tries to access the page, the page may now
> be corrupted because I/O was allowed to be done to it? Or will the I/O
> have been blocked in some way, but without generating the I/O error?

No, the I/O would be blocked by the hardware. That's why we encrypt and export
the page for I/O usage. As soon as the refcount drops to the expected value,
the guest can access its (unchanged) content after the import. The import
would check the hash etc., so there is no corruption of the guest state in any
case (apart from denial of service, which is always possible).
If we did not have these hooks, a malicious user could trigger I/O (which
would be blocked), but the blocked I/O would generate an I/O error. And this
could bring trouble to some device drivers. And we want to avoid that.

In other words: the hardware/firmware will ensure guest integrity. But host
integrity (kernel vs userspace) must be enforced by the host kernel as usual,
and this is one part of it.

But thanks for the clarification that you do not need those hooks.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-07 11:39 ` [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages Christian Borntraeger
  2020-02-10 17:27   ` Christian Borntraeger
  2020-02-10 18:17   ` David Hildenbrand
@ 2020-02-18  3:36   ` Tian, Kevin
  2020-02-18  6:44     ` Christian Borntraeger
  2 siblings, 1 reply; 47+ messages in thread
From: Tian, Kevin @ 2020-02-18  3:36 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton

> From: Christian Borntraeger
> Sent: Friday, February 7, 2020 7:39 PM
> 
> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
> 
> With the introduction of protected KVM guests on s390 there is now a
> concept of inaccessible pages. These pages need to be made accessible
> before the host can access them.
> 
> While cpu accesses will trigger a fault that can be resolved, I/O
> accesses will just fail.  We need to add a callback into architecture
> code for places that will do I/O, namely when writeback is started or
> when a page reference is taken.

What about hooking the callback to DMA API ops?

> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  include/linux/gfp.h | 6 ++++++
>  mm/gup.c            | 2 ++
>  mm/page-writeback.c | 1 +
>  3 files changed, 9 insertions(+)
> 
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index e5b817cb86e7..be2754841369 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -485,6 +485,12 @@ static inline void arch_free_page(struct page *page,
> int order) { }
>  #ifndef HAVE_ARCH_ALLOC_PAGE
>  static inline void arch_alloc_page(struct page *page, int order) { }
>  #endif
> +#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
> +static inline int arch_make_page_accessible(struct page *page)
> +{
> +	return 0;
> +}
> +#endif
> 
>  struct page *
>  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int
> preferred_nid,
> diff --git a/mm/gup.c b/mm/gup.c
> index 7646bf993b25..a01262cd2821 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -257,6 +257,7 @@ static struct page *follow_page_pte(struct
> vm_area_struct *vma,
>  			page = ERR_PTR(-ENOMEM);
>  			goto out;
>  		}
> +		arch_make_page_accessible(page);
>  	}
>  	if (flags & FOLL_TOUCH) {
>  		if ((flags & FOLL_WRITE) &&
> @@ -1870,6 +1871,7 @@ static int gup_pte_range(pmd_t pmd, unsigned
> long addr, unsigned long end,
> 
>  		VM_BUG_ON_PAGE(compound_head(page) != head, page);
> 
> +		arch_make_page_accessible(page);
>  		SetPageReferenced(page);
>  		pages[*nr] = page;
>  		(*nr)++;
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 2caf780a42e7..0f0bd14571b1 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2806,6 +2806,7 @@ int __test_set_page_writeback(struct page *page,
> bool keep_write)
>  		inc_lruvec_page_state(page, NR_WRITEBACK);
>  		inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
>  	}
> +	arch_make_page_accessible(page);
>  	unlock_page_memcg(page);
>  	return ret;
> 
> --
> 2.24.0
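For architectures that want to opt in to the hook, the pattern would
presumably mirror arch_alloc_page()/arch_free_page(): define the guard
macro in an arch header and declare a real implementation. A sketch
(hypothetical placement, the exact header is up to the architecture):

	/* e.g. in arch/s390/include/asm/page.h */
	#define HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
	int arch_make_page_accessible(struct page *page);

with the implementation living in arch/s390/kernel/uv.c, as quoted
earlier in the thread.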



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-18  3:36   ` Tian, Kevin
@ 2020-02-18  6:44     ` Christian Borntraeger
  0 siblings, 0 replies; 47+ messages in thread
From: Christian Borntraeger @ 2020-02-18  6:44 UTC (permalink / raw)
  To: Tian, Kevin, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, Andrew Morton


On 18.02.20 04:36, Tian, Kevin wrote:
>> From: Christian Borntraeger
>> Sent: Friday, February 7, 2020 7:39 PM
>>
>> From: Claudio Imbrenda <imbrenda@linux.ibm.com>
>>
>> With the introduction of protected KVM guests on s390 there is now a
>> concept of inaccessible pages. These pages need to be made accessible
>> before the host can access them.
>>
>> While cpu accesses will trigger a fault that can be resolved, I/O
>> accesses will just fail.  We need to add a callback into architecture
>> code for places that will do I/O, namely when writeback is started or
>> when a page reference is taken.
> 
> What about hooking the callback to DMA API ops?

Not all device drivers use the DMA API, so it won't work for us.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages
  2020-02-13 14:48       ` Christian Borntraeger
@ 2020-02-18 16:02         ` Will Deacon
  0 siblings, 0 replies; 47+ messages in thread
From: Will Deacon @ 2020-02-18 16:02 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Janosch Frank, Andrew Morton, Marc Zyngier, Sean Christopherson,
	Tom Lendacky, KVM, Cornelia Huck, David Hildenbrand, Thomas Huth,
	Ulrich Weigand, Claudio Imbrenda, Andrea Arcangeli, linux-s390,
	Michael Mueller, Vasily Gorbik, linux-mm, kvm-ppc, Paolo Bonzini,
	mark.rutland, qperret, palmerdabbelt

On Thu, Feb 13, 2020 at 03:48:16PM +0100, Christian Borntraeger wrote:
> 
> 
> On 11.02.20 12:26, Will Deacon wrote:
> > On Mon, Feb 10, 2020 at 06:27:04PM +0100, Christian Borntraeger wrote:
> >> CC Marc Zyngier for KVM on ARM.  Marc, see below. Will there be any
> >> use for this on KVM/ARM in the future?
> > 
> > I can't speak for Marc, but I can say that we're interested in something
> > like this for potentially isolating VMs from a KVM host in Android.
> > However, we've currently been working on the assumption that the memory
> > removed from the host won't usually be touched by the host (i.e. no
> > KSM or swapping out), so all we'd probably want at the moment is to be
> > able to return an error back from arch_make_page_accessible(). Its return
> > code is ignored in this patch :/
> 
> I think there are two ways at the moment. One is to keep the memory away from
> Linux, e.g. by using the memory as device driver memory like kmalloc. This is
> kind of what Power does. And I understand that you want to follow that model
> and do not want to use paging, file backing, or the like.

Correct.

> Our approach tries to fully integrate into the existing Linux LRU methods.
> 
> Back to your approach. What happens when a malicious QEMU starts direct I/O
> on such isolated memory? Is that what you meant by adding error checking in
> these hooks? For the gup.c code, returning an error seems straightforward.

Yes, it would be nice if the host could avoid even trying to access the
page if it's inaccessible and so returning an error from
arch_make_page_accessible() would be a good way to achieve that. If the
access goes ahead anyway, then the hypervisor will have to handle the
fault and effectively ignore the host access (writes will be lost, reads
will return poison).
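The error handling being asked for could look something like this in the
gup path (a sketch against the follow_page_pte() hunk quoted earlier,
assuming an int ret local and that the just-taken reference is dropped on
failure; not necessarily the exact final fix):

	ret = arch_make_page_accessible(page);
	if (ret) {
		put_page(page);
		page = ERR_PTR(ret);
		goto out;
	}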

> I have no idea what to do in writeback. If somebody managed to trigger
> writeback on such a page, it already seems too late.

For now, we could just have a BUG_ON().

Will


^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2020-02-18 16:03 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-07 11:39 [PATCH 00/35] KVM: s390: Add support for protected VMs Christian Borntraeger
2020-02-07 11:39 ` [PATCH 01/35] mm:gup/writeback: add callbacks for inaccessible pages Christian Borntraeger
2020-02-10 17:27   ` Christian Borntraeger
2020-02-11 11:26     ` Will Deacon
2020-02-11 11:43       ` Christian Borntraeger
2020-02-13 14:48       ` Christian Borntraeger
2020-02-18 16:02         ` Will Deacon
2020-02-13 19:56     ` Sean Christopherson
2020-02-13 20:13       ` Christian Borntraeger
2020-02-13 20:46         ` Sean Christopherson
2020-02-17 20:55         ` Tom Lendacky
2020-02-17 21:14           ` Christian Borntraeger
2020-02-10 18:17   ` David Hildenbrand
2020-02-10 18:28     ` Christian Borntraeger
2020-02-10 18:43       ` David Hildenbrand
2020-02-10 18:51         ` Christian Borntraeger
2020-02-18  3:36   ` Tian, Kevin
2020-02-18  6:44     ` Christian Borntraeger
2020-02-07 11:39 ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages Christian Borntraeger
2020-02-10 12:26   ` David Hildenbrand
2020-02-10 18:38     ` Christian Borntraeger
2020-02-10 19:33       ` David Hildenbrand
2020-02-11  9:23         ` [PATCH v2 RFC] " Christian Borntraeger
2020-02-12 11:52           ` Christian Borntraeger
2020-02-12 12:16           ` David Hildenbrand
2020-02-12 12:22             ` Christian Borntraeger
2020-02-12 12:47               ` David Hildenbrand
2020-02-12 12:39           ` Cornelia Huck
2020-02-12 12:44             ` Christian Borntraeger
2020-02-12 13:07               ` Cornelia Huck
2020-02-10 18:56     ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt Ulrich Weigand
2020-02-10 12:40   ` [PATCH 02/35] KVM: s390/interrupt: do not pin adapter interrupt pages David Hildenbrand
2020-02-07 11:39 ` [PATCH 05/35] s390/mm: provide memory management functions for protected KVM guests Christian Borntraeger
2020-02-12 13:42   ` Cornelia Huck
2020-02-13  7:43     ` Christian Borntraeger
2020-02-13  8:44       ` Cornelia Huck
2020-02-14 17:59   ` David Hildenbrand
2020-02-14 21:17     ` Christian Borntraeger
2020-02-07 11:39 ` [PATCH 06/35] s390/mm: add (non)secure page access exceptions handlers Christian Borntraeger
2020-02-14 18:05   ` David Hildenbrand
2020-02-14 19:59     ` Christian Borntraeger
2020-02-07 11:39 ` [PATCH 10/35] KVM: s390: protvirt: Secure memory is not mergeable Christian Borntraeger
2020-02-07 11:39 ` [PATCH 11/35] KVM: s390/mm: Make pages accessible before destroying the guest Christian Borntraeger
2020-02-14 18:40   ` David Hildenbrand
2020-02-07 11:39 ` [PATCH 21/35] KVM: s390/mm: handle guest unpin events Christian Borntraeger
2020-02-10 14:58   ` Thomas Huth
2020-02-11 13:21     ` Cornelia Huck
