kvm.vger.kernel.org archive mirror
* [PATCH v2 0/8] s390: virtio: support protected virtualization
@ 2019-05-23 16:22 Michael Mueller
  2019-05-23 16:22 ` [PATCH v2 1/8] s390/mm: force swiotlb for " Michael Mueller
                   ` (7 more replies)
  0 siblings, 8 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-23 16:22 UTC (permalink / raw)
  To: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Sebastian Ott, Heiko Carstens
  Cc: Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel,
	Michael Mueller

Enhanced virtualization protection technology may require the use of
bounce buffers for I/O. While support for this was built into the virtio
core, virtio-ccw wasn't changed accordingly.

Some background on the technology (not part of this series) and the
terminology used:

* Protected Virtualization (PV):

Protected Virtualization guarantees that the non-shared memory of a
guest that operates in PV mode is private to that guest, i.e. any
attempt by the hypervisor or other guests to access it will result in
an exception. If supported by the environment (machine, KVM, guest VM),
a guest can decide to switch into PV mode by making the appropriate
ultravisor calls.

* Ultravisor:

A hardware/firmware entity that manages PV guests and polices access to
their memory. A prospective PV guest needs to interact with the
ultravisor to enter PV mode, and potentially to share pages (for I/O;
data that must remain confidential should be encrypted by the guest
before sharing). A guest interacts with the ultravisor via so-called
ultravisor calls. A hypervisor needs to interact with the ultravisor to
facilitate interpretation, emulation and swapping; it does so via
ultravisor calls and via the SIE state description. Generally, the
ultravisor sanitizes hypervisor inputs so that the guest cannot be
corrupted (except for denial of service).
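The access policing described above can be illustrated with a toy model
(illustration only; the class and method names below are made up and are
not the real ultravisor interface): a hypervisor access to guest memory
succeeds only for pages the guest has explicitly shared.

```python
class ProtectionException(Exception):
    """Raised when the hypervisor touches non-shared guest memory."""

class UltravisorModel:
    """Toy model of ultravisor-policed guest memory (not the real API)."""

    def __init__(self):
        self.shared_pages = set()

    # guest-side "ultravisor calls"
    def uv_set_shared(self, page):
        self.shared_pages.add(page)

    def uv_remove_shared(self, page):
        self.shared_pages.discard(page)

    # hypervisor-side accesses are policed
    def hypervisor_read(self, page, guest_memory):
        if page not in self.shared_pages:
            raise ProtectionException("page %#x is not shared" % page)
        return guest_memory[page]
```

A read of an unshared page raises, mirroring the exception the real
hypervisor would see; after uv_set_shared() the same read succeeds.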


What needs to be done
=====================

Thus what needs to be done to bring virtio-ccw up to speed with respect
to protected virtualization is:
* use some 'new' common virtio stuff
* make sure that virtio-ccw specific stuff uses shared memory when
  talking to the hypervisor (except control/communication blocks like ORB,
  these are handled by the ultravisor)
* make sure the DMA API does what is necessary to talk through shared
  memory if we are a protected virtualization guest.
* make sure the common IO layer plays along as well (airqs, sense).
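The bounce-buffer flow behind the DMA API point above can be sketched as
a userspace Python analogue (not kernel code; the real swiotlb details
are elided): device-visible I/O only ever touches a pool of pages that
were shared with the hypervisor up front, and data is copied in and out
of that pool around each transfer.

```python
PAGE_SIZE = 4096

class BounceBufferPool:
    """Toy swiotlb analogue: the device only ever sees the shared pool."""

    def __init__(self, uv_set_shared, npages=16):
        self.pool = bytearray(npages * PAGE_SIZE)
        # share the whole pool with the hypervisor once, at init time
        for page in range(0, len(self.pool), PAGE_SIZE):
            uv_set_shared(page)
        self.offset = 0  # trivial bump allocator, no recycling

    def map_for_device(self, data):
        """Bounce guest-private data into the shared pool (DMA_TO_DEVICE)."""
        off = self.offset
        self.pool[off:off + len(data)] = data
        self.offset += len(data)
        return off  # the "dma address" the device would be given

    def unmap_from_device(self, off, length):
        """Copy device results back out of the shared pool (DMA_FROM_DEVICE)."""
        return bytes(self.pool[off:off + length])
```

Guest-private buffers never have to be shared themselves; only the
fixed pool is.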


Important notes
================

* This patch set is based on Martin's features branch
  (git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git,
  branch 'features').

* Documentation is still very sketchy. I'm committed to improving this,
  but I'm currently hampered by some dependencies.

* The existing naming in the common infrastructure (kernel internal
  interfaces) is pretty much based on the AMD SEV terminology. Thus the
  names aren't always perfect. There might be merit to changing these
  names to more abstract ones. I did not put much thought into that at
  the current stage.

* Testing: Please use iommu_platform=on for any virtio devices you are
  going to test this code with (so virtio actually uses the DMA API).
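For example (a sketch only; the netdev/drive ids and the rest of the
command line are placeholders):

```sh
qemu-system-s390x ... \
    -device virtio-net-ccw,iommu_platform=on,netdev=n0 \
    -device virtio-blk-ccw,iommu_platform=on,drive=d0
```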

Change log
==========

v1 --> v2:
* patch "virtio/s390: use vring_create_virtqueue" went already upstream
* patch "virtio/s390: DMA support for virtio-ccw" went already upstream
* patch "virtio/s390: enable packed ring" went already upstream
* Made dev.dma_mask point to dev.coherent_dma_mask for css, subchannel
  and ccw devices.
* While rebasing 's390/airq: use DMA memory for adapter interrupts' the
  newly introduced kmem_cache was replaced with an equivalent dma_pool;
  the kzalloc() allocations are now replaced with cio_dma_zalloc()
  allocations to avoid wasting almost a full page.
* Made virtio-ccw use the new AIRQ_IV_CACHELINE flag.
* fixed all remaining checkpatch issues

RFC --> v1:
* Fixed bugs found by Connie (may_reduce and handling reduced, warning,
  split move -- thanks Connie!).
* Fixed console bug found by Sebastian (thanks Sebastian!).
* Removed the completely useless duplicate of dma-mapping.h spotted by
  Christoph (thanks Christoph!).
* Don't use the global DMA pool for subchannel and ccw device
  owned memory as requested by Sebastian. Consequences:
	* Both subchannel and ccw devices have their dma masks
	now (both specifying 31 bit addressable)
	* We require at least 2 DMA pages per ccw device now, most of
	this memory is wasted though.
	* DMA memory allocated by virtio is also 31 bit addressable now
        as virtio uses the parent (which is the ccw device).
* Enabled packed ring.
* Rebased onto Martin's features branch; using the actual uv (ultravisor)
  interface instead of TODO comments.
* Added some explanations to the cover letter (Connie, David).
* Squashed a couple of patches together and fixed some text stuff. 

Looking forward to your review, or any other type of input.

Halil Pasic (8):
  s390/mm: force swiotlb for protected virtualization
  s390/cio: introduce DMA pools to cio
  s390/cio: add basic protected virtualization support
  s390/airq: use DMA memory for adapter interrupts
  virtio/s390: use cacheline aligned airq bit vectors
  virtio/s390: add indirection to indicators access
  virtio/s390: use DMA memory for ccw I/O and classic notifiers
  virtio/s390: make airq summary indicators DMA

 arch/s390/Kconfig                   |   5 +
 arch/s390/include/asm/airq.h        |   2 +
 arch/s390/include/asm/ccwdev.h      |   4 +
 arch/s390/include/asm/cio.h         |  11 ++
 arch/s390/include/asm/mem_encrypt.h |  18 +++
 arch/s390/mm/init.c                 |  47 +++++++
 drivers/s390/cio/airq.c             |  32 +++--
 drivers/s390/cio/ccwreq.c           |   9 +-
 drivers/s390/cio/cio.h              |   2 +
 drivers/s390/cio/css.c              | 111 ++++++++++++++++
 drivers/s390/cio/device.c           |  64 ++++++++--
 drivers/s390/cio/device_fsm.c       |  53 +++++---
 drivers/s390/cio/device_id.c        |  20 +--
 drivers/s390/cio/device_ops.c       |  21 +++-
 drivers/s390/cio/device_pgid.c      |  22 ++--
 drivers/s390/cio/device_status.c    |  24 ++--
 drivers/s390/cio/io_sch.h           |  20 ++-
 drivers/s390/virtio/virtio_ccw.c    | 244 ++++++++++++++++++++----------------
 18 files changed, 514 insertions(+), 195 deletions(-)
 create mode 100644 arch/s390/include/asm/mem_encrypt.h

-- 
2.13.4


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 1/8] s390/mm: force swiotlb for protected virtualization
  2019-05-23 16:22 [PATCH v2 0/8] s390: virtio: support protected virtualization Michael Mueller
@ 2019-05-23 16:22 ` Michael Mueller
  2019-05-23 16:22 ` [PATCH v2 2/8] s390/cio: introduce DMA pools to cio Michael Mueller
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-23 16:22 UTC (permalink / raw)
  To: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Sebastian Ott, Heiko Carstens
  Cc: Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel,
	Michael Mueller

From: Halil Pasic <pasic@linux.ibm.com>

On s390, protected virtualization guests have to use bounce buffers
for I/O. That requires some plumbing.

Let us make sure that any device that correctly uses the DMA API with
direct ops is spared the problems that a hypervisor attempting I/O to
a non-shared page would bring.

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/Kconfig                   |  4 ++++
 arch/s390/include/asm/mem_encrypt.h | 18 ++++++++++++++
 arch/s390/mm/init.c                 | 47 +++++++++++++++++++++++++++++++++++++
 3 files changed, 69 insertions(+)
 create mode 100644 arch/s390/include/asm/mem_encrypt.h

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 109243fdb6ec..88d8355b7bf7 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -1,4 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
+config ARCH_HAS_MEM_ENCRYPT
+        def_bool y
+
 config MMU
 	def_bool y
 
@@ -187,6 +190,7 @@ config S390
 	select VIRT_CPU_ACCOUNTING
 	select ARCH_HAS_SCALED_CPUTIME
 	select HAVE_NMI
+	select SWIOTLB
 
 
 config SCHED_OMIT_FRAME_POINTER
diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
new file mode 100644
index 000000000000..0898c09a888c
--- /dev/null
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef S390_MEM_ENCRYPT_H__
+#define S390_MEM_ENCRYPT_H__
+
+#ifndef __ASSEMBLY__
+
+#define sme_me_mask	0ULL
+
+static inline bool sme_active(void) { return false; }
+extern bool sev_active(void);
+
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
+
+#endif	/* __ASSEMBLY__ */
+
+#endif	/* S390_MEM_ENCRYPT_H__ */
+
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 14d1eae9fe43..f0bee6af3960 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -18,6 +18,7 @@
 #include <linux/mman.h>
 #include <linux/mm.h>
 #include <linux/swap.h>
+#include <linux/swiotlb.h>
 #include <linux/smp.h>
 #include <linux/init.h>
 #include <linux/pagemap.h>
@@ -29,6 +30,7 @@
 #include <linux/export.h>
 #include <linux/cma.h>
 #include <linux/gfp.h>
+#include <linux/dma-mapping.h>
 #include <asm/processor.h>
 #include <linux/uaccess.h>
 #include <asm/pgtable.h>
@@ -42,6 +44,8 @@
 #include <asm/sclp.h>
 #include <asm/set_memory.h>
 #include <asm/kasan.h>
+#include <asm/dma-mapping.h>
+#include <asm/uv.h>
 
 pgd_t swapper_pg_dir[PTRS_PER_PGD] __section(.bss..swapper_pg_dir);
 
@@ -128,6 +132,47 @@ void mark_rodata_ro(void)
 	pr_info("Write protected read-only-after-init data: %luk\n", size >> 10);
 }
 
+int set_memory_encrypted(unsigned long addr, int numpages)
+{
+	int i;
+
+	/* make specified pages unshared (swiotlb, dma_free) */
+	for (i = 0; i < numpages; ++i) {
+		uv_remove_shared(addr);
+		addr += PAGE_SIZE;
+	}
+	return 0;
+}
+
+int set_memory_decrypted(unsigned long addr, int numpages)
+{
+	int i;
+	/* make specified pages shared (swiotlb, dma_alloc) */
+	for (i = 0; i < numpages; ++i) {
+		uv_set_shared(addr);
+		addr += PAGE_SIZE;
+	}
+	return 0;
+}
+
+/* are we a protected virtualization guest? */
+bool sev_active(void)
+{
+	return is_prot_virt_guest();
+}
+
+/* protected virtualization */
+static void pv_init(void)
+{
+	if (!is_prot_virt_guest())
+		return;
+
+	/* make sure bounce buffers are shared */
+	swiotlb_init(1);
+	swiotlb_update_mem_attributes();
+	swiotlb_force = SWIOTLB_FORCE;
+}
+
 void __init mem_init(void)
 {
 	cpumask_set_cpu(0, &init_mm.context.cpu_attach_mask);
@@ -136,6 +181,8 @@ void __init mem_init(void)
 	set_max_mapnr(max_low_pfn);
         high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
 
+	pv_init();
+
 	/* Setup guest page hinting */
 	cmma_init();
 
-- 
2.13.4



* [PATCH v2 2/8] s390/cio: introduce DMA pools to cio
  2019-05-23 16:22 [PATCH v2 0/8] s390: virtio: support protected virtualization Michael Mueller
  2019-05-23 16:22 ` [PATCH v2 1/8] s390/mm: force swiotlb for " Michael Mueller
@ 2019-05-23 16:22 ` Michael Mueller
  2019-05-25  9:22   ` Sebastian Ott
  2019-05-27  6:57   ` Cornelia Huck
  2019-05-23 16:22 ` [PATCH v2 3/8] s390/cio: add basic protected virtualization support Michael Mueller
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-23 16:22 UTC (permalink / raw)
  To: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Sebastian Ott, Heiko Carstens
  Cc: Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel,
	Michael Mueller

From: Halil Pasic <pasic@linux.ibm.com>

To support protected virtualization, cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.

Let us introduce one global pool for cio, and some tools for pools
seated at individual devices.

Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a page, as we
typically allocate much less than PAGE_SIZE.
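The grow-on-demand strategy used by the pool (add a page-rounded DMA
chunk when an allocation fails, then retry) can be sketched in plain
Python; this is an illustration of the strategy only, not the genalloc
API, and it ignores fragmentation:

```python
PAGE_SIZE = 4096

def round_up(size, align):
    # ceiling division, then scale back up
    return -(-size // align) * align

class ToyDmaPool:
    """Toy model of a gen_pool backed with DMA pages (strategy only)."""

    def __init__(self, alloc_chunk):
        self.alloc_chunk = alloc_chunk  # stands in for dma_alloc_coherent()
        self.chunks = []                # chunks added to the pool so far
        self.free_bytes = 0

    def zalloc(self, size):
        # mirror the retry loop: grow by a page-rounded chunk on failure
        while self.free_bytes < size:
            chunk = round_up(size, PAGE_SIZE)
            self.chunks.append(self.alloc_chunk(chunk))
            self.free_bytes += chunk
        self.free_bytes -= size
        return bytearray(size)  # zeroed, as with __GFP_ZERO
```

A first small allocation grows the pool by one page; later allocations
are satisfied from the pool until it has to grow again.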

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
---
 arch/s390/Kconfig           |   1 +
 arch/s390/include/asm/cio.h |  11 +++++
 drivers/s390/cio/css.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 122 insertions(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 88d8355b7bf7..2a245b56db8b 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -191,6 +191,7 @@ config S390
 	select ARCH_HAS_SCALED_CPUTIME
 	select HAVE_NMI
 	select SWIOTLB
+	select GENERIC_ALLOCATOR
 
 
 config SCHED_OMIT_FRAME_POINTER
diff --git a/arch/s390/include/asm/cio.h b/arch/s390/include/asm/cio.h
index 1727180e8ca1..43c007d2775a 100644
--- a/arch/s390/include/asm/cio.h
+++ b/arch/s390/include/asm/cio.h
@@ -328,6 +328,17 @@ static inline u8 pathmask_to_pos(u8 mask)
 void channel_subsystem_reinit(void);
 extern void css_schedule_reprobe(void);
 
+extern void *cio_dma_zalloc(size_t size);
+extern void cio_dma_free(void *cpu_addr, size_t size);
+extern struct device *cio_get_dma_css_dev(void);
+
+struct gen_pool;
+void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
+			size_t size);
+void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size);
+void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev);
+struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages);
+
 /* Function from drivers/s390/cio/chsc.c */
 int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta);
 int chsc_sstpi(void *page, void *result, size_t size);
diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
index aea502922646..789f6ecdbbcc 100644
--- a/drivers/s390/cio/css.c
+++ b/drivers/s390/cio/css.c
@@ -20,6 +20,8 @@
 #include <linux/reboot.h>
 #include <linux/suspend.h>
 #include <linux/proc_fs.h>
+#include <linux/genalloc.h>
+#include <linux/dma-mapping.h>
 #include <asm/isc.h>
 #include <asm/crw.h>
 
@@ -224,6 +226,8 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
 	INIT_WORK(&sch->todo_work, css_sch_todo);
 	sch->dev.release = &css_subchannel_release;
 	device_initialize(&sch->dev);
+	sch->dev.coherent_dma_mask = DMA_BIT_MASK(31);
+	sch->dev.dma_mask = &sch->dev.coherent_dma_mask;
 	return sch;
 
 err:
@@ -899,6 +903,8 @@ static int __init setup_css(int nr)
 	dev_set_name(&css->device, "css%x", nr);
 	css->device.groups = cssdev_attr_groups;
 	css->device.release = channel_subsystem_release;
+	css->device.coherent_dma_mask = DMA_BIT_MASK(64);
+	css->device.dma_mask = &css->device.coherent_dma_mask;
 
 	mutex_init(&css->mutex);
 	css->cssid = chsc_get_cssid(nr);
@@ -1018,6 +1024,109 @@ static struct notifier_block css_power_notifier = {
 	.notifier_call = css_power_event,
 };
 
+#define POOL_INIT_PAGES 1
+static struct gen_pool *cio_dma_pool;
+/* Currently cio supports only a single css */
+#define  CIO_DMA_GFP (GFP_KERNEL | __GFP_ZERO)
+
+
+struct device *cio_get_dma_css_dev(void)
+{
+	return &channel_subsystems[0]->device;
+}
+
+struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
+{
+	struct gen_pool *gp_dma;
+	void *cpu_addr;
+	dma_addr_t dma_addr;
+	int i;
+
+	gp_dma = gen_pool_create(3, -1);
+	if (!gp_dma)
+		return NULL;
+	for (i = 0; i < nr_pages; ++i) {
+		cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr,
+					      CIO_DMA_GFP);
+		if (!cpu_addr)
+			return gp_dma;
+		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
+				  dma_addr, PAGE_SIZE, -1);
+	}
+	return gp_dma;
+}
+
+static void __gp_dma_free_dma(struct gen_pool *pool,
+			      struct gen_pool_chunk *chunk, void *data)
+{
+	size_t chunk_size = chunk->end_addr - chunk->start_addr + 1;
+
+	dma_free_coherent((struct device *) data, chunk_size,
+			 (void *) chunk->start_addr,
+			 (dma_addr_t) chunk->phys_addr);
+}
+
+void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev)
+{
+	if (!gp_dma)
+		return;
+	/* this is quite ugly but no better idea */
+	gen_pool_for_each_chunk(gp_dma, __gp_dma_free_dma, dma_dev);
+	gen_pool_destroy(gp_dma);
+}
+
+static void __init cio_dma_pool_init(void)
+{
+	/* No need to free up the resources: compiled in */
+	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);
+}
+
+void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
+			size_t size)
+{
+	dma_addr_t dma_addr;
+	unsigned long addr;
+	size_t chunk_size;
+
+	addr = gen_pool_alloc(gp_dma, size);
+	while (!addr) {
+		chunk_size = round_up(size, PAGE_SIZE);
+		addr = (unsigned long) dma_alloc_coherent(dma_dev,
+					 chunk_size, &dma_addr, CIO_DMA_GFP);
+		if (!addr)
+			return NULL;
+		gen_pool_add_virt(gp_dma, addr, dma_addr, chunk_size, -1);
+		addr = gen_pool_alloc(gp_dma, size);
+	}
+	return (void *) addr;
+}
+
+void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size)
+{
+	if (!cpu_addr)
+		return;
+	memset(cpu_addr, 0, size);
+	gen_pool_free(gp_dma, (unsigned long) cpu_addr, size);
+}
+
+/**
+ * cio_dma_zalloc() - allocate DMA memory from the css global pool
+ *
+ * Intended for memory not specific to any single device within the
+ * css. The allocated memory is not guaranteed to be 31-bit addressable.
+ *
+ * Caution: not suitable for early stuff like the console.
+ */
+void *cio_dma_zalloc(size_t size)
+{
+	return cio_gp_dma_zalloc(cio_dma_pool, cio_get_dma_css_dev(), size);
+}
+
+void cio_dma_free(void *cpu_addr, size_t size)
+{
+	cio_gp_dma_free(cio_dma_pool, cpu_addr, size);
+}
+
 /*
  * Now that the driver core is running, we can setup our channel subsystem.
  * The struct subchannel's are created during probing.
@@ -1063,6 +1172,7 @@ static int __init css_bus_init(void)
 		unregister_reboot_notifier(&css_reboot_notifier);
 		goto out_unregister;
 	}
+	cio_dma_pool_init();
 	css_init_done = 1;
 
 	/* Enable default isc for I/O subchannels. */
-- 
2.13.4



* [PATCH v2 3/8] s390/cio: add basic protected virtualization support
  2019-05-23 16:22 [PATCH v2 0/8] s390: virtio: support protected virtualization Michael Mueller
  2019-05-23 16:22 ` [PATCH v2 1/8] s390/mm: force swiotlb for " Michael Mueller
  2019-05-23 16:22 ` [PATCH v2 2/8] s390/cio: introduce DMA pools to cio Michael Mueller
@ 2019-05-23 16:22 ` Michael Mueller
  2019-05-25  9:44   ` Sebastian Ott
  2019-05-27 10:38   ` Cornelia Huck
  2019-05-23 16:22 ` [PATCH v2 4/8] s390/airq: use DMA memory for adapter interrupts Michael Mueller
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-23 16:22 UTC (permalink / raw)
  To: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Sebastian Ott, Heiko Carstens
  Cc: Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel,
	Michael Mueller

From: Halil Pasic <pasic@linux.ibm.com>

As virtio-ccw devices are channel devices, we need to use the DMA area
for any communication with the hypervisor.

This patch handles neither QDIO in the common code, nor any device type
specific stuff (like channel programs constructed by the DASD driver).

An interesting side effect is that virtio structures are now going to
get allocated in 31 bit addressable storage.
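For reference, "31 bit addressable" means addresses that fit
DMA_BIT_MASK(31), i.e. below 2 GiB; a quick sanity check of the mask
arithmetic (valid for n < 64; the kernel macro special-cases n == 64):

```python
def dma_bit_mask(n):
    # mirrors the kernel's DMA_BIT_MASK(n) for n < 64
    return (1 << n) - 1

# subchannel and ccw devices use a 31-bit mask, the css device 64 bits
assert dma_bit_mask(31) == 0x7fffffff
assert dma_bit_mask(31) + 1 == 2**31  # first address past the 2 GiB line
```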

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
---
 arch/s390/include/asm/ccwdev.h   |  4 +++
 drivers/s390/cio/ccwreq.c        |  9 +++---
 drivers/s390/cio/device.c        | 64 +++++++++++++++++++++++++++++++++-------
 drivers/s390/cio/device_fsm.c    | 53 ++++++++++++++++++++-------------
 drivers/s390/cio/device_id.c     | 20 +++++++------
 drivers/s390/cio/device_ops.c    | 21 +++++++++++--
 drivers/s390/cio/device_pgid.c   | 22 +++++++-------
 drivers/s390/cio/device_status.c | 24 +++++++--------
 drivers/s390/cio/io_sch.h        | 20 +++++++++----
 drivers/s390/virtio/virtio_ccw.c | 10 -------
 10 files changed, 164 insertions(+), 83 deletions(-)

diff --git a/arch/s390/include/asm/ccwdev.h b/arch/s390/include/asm/ccwdev.h
index a29dd430fb40..865ce1cb86d5 100644
--- a/arch/s390/include/asm/ccwdev.h
+++ b/arch/s390/include/asm/ccwdev.h
@@ -226,6 +226,10 @@ extern int ccw_device_enable_console(struct ccw_device *);
 extern void ccw_device_wait_idle(struct ccw_device *);
 extern int ccw_device_force_console(struct ccw_device *);
 
+extern void *ccw_device_dma_zalloc(struct ccw_device *cdev, size_t size);
+extern void ccw_device_dma_free(struct ccw_device *cdev,
+				void *cpu_addr, size_t size);
+
 int ccw_device_siosl(struct ccw_device *);
 
 extern void ccw_device_get_schid(struct ccw_device *, struct subchannel_id *);
diff --git a/drivers/s390/cio/ccwreq.c b/drivers/s390/cio/ccwreq.c
index 603268a33ea1..73582a0a2622 100644
--- a/drivers/s390/cio/ccwreq.c
+++ b/drivers/s390/cio/ccwreq.c
@@ -63,7 +63,7 @@ static void ccwreq_stop(struct ccw_device *cdev, int rc)
 		return;
 	req->done = 1;
 	ccw_device_set_timeout(cdev, 0);
-	memset(&cdev->private->irb, 0, sizeof(struct irb));
+	memset(&cdev->private->dma_area->irb, 0, sizeof(struct irb));
 	if (rc && rc != -ENODEV && req->drc)
 		rc = req->drc;
 	req->callback(cdev, req->data, rc);
@@ -86,7 +86,7 @@ static void ccwreq_do(struct ccw_device *cdev)
 			continue;
 		}
 		/* Perform start function. */
-		memset(&cdev->private->irb, 0, sizeof(struct irb));
+		memset(&cdev->private->dma_area->irb, 0, sizeof(struct irb));
 		rc = cio_start(sch, cp, (u8) req->mask);
 		if (rc == 0) {
 			/* I/O started successfully. */
@@ -169,7 +169,7 @@ int ccw_request_cancel(struct ccw_device *cdev)
  */
 static enum io_status ccwreq_status(struct ccw_device *cdev, struct irb *lcirb)
 {
-	struct irb *irb = &cdev->private->irb;
+	struct irb *irb = &cdev->private->dma_area->irb;
 	struct cmd_scsw *scsw = &irb->scsw.cmd;
 	enum uc_todo todo;
 
@@ -187,7 +187,8 @@ static enum io_status ccwreq_status(struct ccw_device *cdev, struct irb *lcirb)
 		CIO_TRACE_EVENT(2, "sensedata");
 		CIO_HEX_EVENT(2, &cdev->private->dev_id,
 			      sizeof(struct ccw_dev_id));
-		CIO_HEX_EVENT(2, &cdev->private->irb.ecw, SENSE_MAX_COUNT);
+		CIO_HEX_EVENT(2, &cdev->private->dma_area->irb.ecw,
+			      SENSE_MAX_COUNT);
 		/* Check for command reject. */
 		if (irb->ecw[0] & SNS0_CMD_REJECT)
 			return IO_REJECTED;
diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
index 1540229a37bb..a9ee03c20d7e 100644
--- a/drivers/s390/cio/device.c
+++ b/drivers/s390/cio/device.c
@@ -24,6 +24,7 @@
 #include <linux/timer.h>
 #include <linux/kernel_stat.h>
 #include <linux/sched/signal.h>
+#include <linux/dma-mapping.h>
 
 #include <asm/ccwdev.h>
 #include <asm/cio.h>
@@ -687,6 +688,9 @@ ccw_device_release(struct device *dev)
 	struct ccw_device *cdev;
 
 	cdev = to_ccwdev(dev);
+	cio_gp_dma_free(cdev->private->dma_pool, cdev->private->dma_area,
+			sizeof(*cdev->private->dma_area));
+	cio_gp_dma_destroy(cdev->private->dma_pool, &cdev->dev);
 	/* Release reference of parent subchannel. */
 	put_device(cdev->dev.parent);
 	kfree(cdev->private);
@@ -696,15 +700,30 @@ ccw_device_release(struct device *dev)
 static struct ccw_device * io_subchannel_allocate_dev(struct subchannel *sch)
 {
 	struct ccw_device *cdev;
+	struct gen_pool *dma_pool;
 
 	cdev  = kzalloc(sizeof(*cdev), GFP_KERNEL);
-	if (cdev) {
-		cdev->private = kzalloc(sizeof(struct ccw_device_private),
-					GFP_KERNEL | GFP_DMA);
-		if (cdev->private)
-			return cdev;
-	}
+	if (!cdev)
+		goto err_cdev;
+	cdev->private = kzalloc(sizeof(struct ccw_device_private),
+				GFP_KERNEL | GFP_DMA);
+	if (!cdev->private)
+		goto err_priv;
+	cdev->dev.coherent_dma_mask = sch->dev.coherent_dma_mask;
+	cdev->dev.dma_mask = &cdev->dev.coherent_dma_mask;
+	dma_pool = cio_gp_dma_create(&cdev->dev, 1);
+	cdev->private->dma_pool = dma_pool;
+	cdev->private->dma_area = cio_gp_dma_zalloc(dma_pool, &cdev->dev,
+					sizeof(*cdev->private->dma_area));
+	if (!cdev->private->dma_area)
+		goto err_dma_area;
+	return cdev;
+err_dma_area:
+	cio_gp_dma_destroy(dma_pool, &cdev->dev);
+	kfree(cdev->private);
+err_priv:
 	kfree(cdev);
+err_cdev:
 	return ERR_PTR(-ENOMEM);
 }
 
@@ -884,7 +903,7 @@ io_subchannel_recog_done(struct ccw_device *cdev)
 			wake_up(&ccw_device_init_wq);
 		break;
 	case DEV_STATE_OFFLINE:
-		/* 
+		/*
 		 * We can't register the device in interrupt context so
 		 * we schedule a work item.
 		 */
@@ -1062,6 +1081,14 @@ static int io_subchannel_probe(struct subchannel *sch)
 	if (!io_priv)
 		goto out_schedule;
 
+	io_priv->dma_area = dma_alloc_coherent(&sch->dev,
+				sizeof(*io_priv->dma_area),
+				&io_priv->dma_area_dma, GFP_KERNEL);
+	if (!io_priv->dma_area) {
+		kfree(io_priv);
+		goto out_schedule;
+	}
+
 	set_io_private(sch, io_priv);
 	css_schedule_eval(sch->schid);
 	return 0;
@@ -1088,6 +1115,8 @@ static int io_subchannel_remove(struct subchannel *sch)
 	set_io_private(sch, NULL);
 	spin_unlock_irq(sch->lock);
 out_free:
+	dma_free_coherent(&sch->dev, sizeof(*io_priv->dma_area),
+			  io_priv->dma_area, io_priv->dma_area_dma);
 	kfree(io_priv);
 	sysfs_remove_group(&sch->dev.kobj, &io_subchannel_attr_group);
 	return 0;
@@ -1593,20 +1622,31 @@ struct ccw_device * __init ccw_device_create_console(struct ccw_driver *drv)
 		return ERR_CAST(sch);
 
 	io_priv = kzalloc(sizeof(*io_priv), GFP_KERNEL | GFP_DMA);
-	if (!io_priv) {
-		put_device(&sch->dev);
-		return ERR_PTR(-ENOMEM);
-	}
+	if (!io_priv)
+		goto err_priv;
+	io_priv->dma_area = dma_alloc_coherent(&sch->dev,
+				sizeof(*io_priv->dma_area),
+				&io_priv->dma_area_dma, GFP_KERNEL);
+	if (!io_priv->dma_area)
+		goto err_dma_area;
 	set_io_private(sch, io_priv);
 	cdev = io_subchannel_create_ccwdev(sch);
 	if (IS_ERR(cdev)) {
 		put_device(&sch->dev);
+		dma_free_coherent(&sch->dev, sizeof(*io_priv->dma_area),
+				  io_priv->dma_area, io_priv->dma_area_dma);
 		kfree(io_priv);
 		return cdev;
 	}
 	cdev->drv = drv;
 	ccw_device_set_int_class(cdev);
 	return cdev;
+
+err_dma_area:
+	kfree(io_priv);
+err_priv:
+	put_device(&sch->dev);
+	return ERR_PTR(-ENOMEM);
 }
 
 void __init ccw_device_destroy_console(struct ccw_device *cdev)
@@ -1617,6 +1657,8 @@ void __init ccw_device_destroy_console(struct ccw_device *cdev)
 	set_io_private(sch, NULL);
 	put_device(&sch->dev);
 	put_device(&cdev->dev);
+	dma_free_coherent(&sch->dev, sizeof(*io_priv->dma_area),
+			  io_priv->dma_area, io_priv->dma_area_dma);
 	kfree(io_priv);
 }
 
diff --git a/drivers/s390/cio/device_fsm.c b/drivers/s390/cio/device_fsm.c
index 9169af7dbb43..96326dbc64ce 100644
--- a/drivers/s390/cio/device_fsm.c
+++ b/drivers/s390/cio/device_fsm.c
@@ -67,8 +67,10 @@ static void ccw_timeout_log(struct ccw_device *cdev)
 			       sizeof(struct tcw), 0);
 	} else {
 		printk(KERN_WARNING "cio: orb indicates command mode\n");
-		if ((void *)(addr_t)orb->cmd.cpa == &private->sense_ccw ||
-		    (void *)(addr_t)orb->cmd.cpa == cdev->private->iccws)
+		if ((void *)(addr_t)orb->cmd.cpa ==
+		    &private->dma_area->sense_ccw ||
+		    (void *)(addr_t)orb->cmd.cpa ==
+		    cdev->private->dma_area->iccws)
 			printk(KERN_WARNING "cio: last channel program "
 			       "(intern):\n");
 		else
@@ -143,18 +145,26 @@ ccw_device_cancel_halt_clear(struct ccw_device *cdev)
 void ccw_device_update_sense_data(struct ccw_device *cdev)
 {
 	memset(&cdev->id, 0, sizeof(cdev->id));
-	cdev->id.cu_type   = cdev->private->senseid.cu_type;
-	cdev->id.cu_model  = cdev->private->senseid.cu_model;
-	cdev->id.dev_type  = cdev->private->senseid.dev_type;
-	cdev->id.dev_model = cdev->private->senseid.dev_model;
+	cdev->id.cu_type   =
+		cdev->private->dma_area->senseid.cu_type;
+	cdev->id.cu_model  =
+		cdev->private->dma_area->senseid.cu_model;
+	cdev->id.dev_type  =
+		cdev->private->dma_area->senseid.dev_type;
+	cdev->id.dev_model =
+		cdev->private->dma_area->senseid.dev_model;
 }
 
 int ccw_device_test_sense_data(struct ccw_device *cdev)
 {
-	return cdev->id.cu_type == cdev->private->senseid.cu_type &&
-		cdev->id.cu_model == cdev->private->senseid.cu_model &&
-		cdev->id.dev_type == cdev->private->senseid.dev_type &&
-		cdev->id.dev_model == cdev->private->senseid.dev_model;
+	return cdev->id.cu_type ==
+		cdev->private->dma_area->senseid.cu_type &&
+		cdev->id.cu_model ==
+		cdev->private->dma_area->senseid.cu_model &&
+		cdev->id.dev_type ==
+		cdev->private->dma_area->senseid.dev_type &&
+		cdev->id.dev_model ==
+		cdev->private->dma_area->senseid.dev_model;
 }
 
 /*
@@ -342,7 +352,7 @@ ccw_device_done(struct ccw_device *cdev, int state)
 		cio_disable_subchannel(sch);
 
 	/* Reset device status. */
-	memset(&cdev->private->irb, 0, sizeof(struct irb));
+	memset(&cdev->private->dma_area->irb, 0, sizeof(struct irb));
 
 	cdev->private->state = state;
 
@@ -509,13 +519,14 @@ void ccw_device_verify_done(struct ccw_device *cdev, int err)
 		ccw_device_done(cdev, DEV_STATE_ONLINE);
 		/* Deliver fake irb to device driver, if needed. */
 		if (cdev->private->flags.fake_irb) {
-			create_fake_irb(&cdev->private->irb,
+			create_fake_irb(&cdev->private->dma_area->irb,
 					cdev->private->flags.fake_irb);
 			cdev->private->flags.fake_irb = 0;
 			if (cdev->handler)
 				cdev->handler(cdev, cdev->private->intparm,
-					      &cdev->private->irb);
-			memset(&cdev->private->irb, 0, sizeof(struct irb));
+					      &cdev->private->dma_area->irb);
+			memset(&cdev->private->dma_area->irb, 0,
+			       sizeof(struct irb));
 		}
 		ccw_device_report_path_events(cdev);
 		ccw_device_handle_broken_paths(cdev);
@@ -672,7 +683,8 @@ ccw_device_online_verify(struct ccw_device *cdev, enum dev_event dev_event)
 
 	if (scsw_actl(&sch->schib.scsw) != 0 ||
 	    (scsw_stctl(&sch->schib.scsw) & SCSW_STCTL_STATUS_PEND) ||
-	    (scsw_stctl(&cdev->private->irb.scsw) & SCSW_STCTL_STATUS_PEND)) {
+	    (scsw_stctl(&cdev->private->dma_area->irb.scsw) &
+	     SCSW_STCTL_STATUS_PEND)) {
 		/*
 		 * No final status yet or final status not yet delivered
 		 * to the device driver. Can't do path verification now,
@@ -719,7 +731,7 @@ static int ccw_device_call_handler(struct ccw_device *cdev)
 	 *  - fast notification was requested (primary status)
 	 *  - unsolicited interrupts
 	 */
-	stctl = scsw_stctl(&cdev->private->irb.scsw);
+	stctl = scsw_stctl(&cdev->private->dma_area->irb.scsw);
 	ending_status = (stctl & SCSW_STCTL_SEC_STATUS) ||
 		(stctl == (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND)) ||
 		(stctl == SCSW_STCTL_STATUS_PEND);
@@ -735,9 +747,9 @@ static int ccw_device_call_handler(struct ccw_device *cdev)
 
 	if (cdev->handler)
 		cdev->handler(cdev, cdev->private->intparm,
-			      &cdev->private->irb);
+			      &cdev->private->dma_area->irb);
 
-	memset(&cdev->private->irb, 0, sizeof(struct irb));
+	memset(&cdev->private->dma_area->irb, 0, sizeof(struct irb));
 	return 1;
 }
 
@@ -759,7 +771,8 @@ ccw_device_irq(struct ccw_device *cdev, enum dev_event dev_event)
 			/* Unit check but no sense data. Need basic sense. */
 			if (ccw_device_do_sense(cdev, irb) != 0)
 				goto call_handler_unsol;
-			memcpy(&cdev->private->irb, irb, sizeof(struct irb));
+			memcpy(&cdev->private->dma_area->irb, irb,
+			       sizeof(struct irb));
 			cdev->private->state = DEV_STATE_W4SENSE;
 			cdev->private->intparm = 0;
 			return;
@@ -842,7 +855,7 @@ ccw_device_w4sense(struct ccw_device *cdev, enum dev_event dev_event)
 	if (scsw_fctl(&irb->scsw) &
 	    (SCSW_FCTL_CLEAR_FUNC | SCSW_FCTL_HALT_FUNC)) {
 		cdev->private->flags.dosense = 0;
-		memset(&cdev->private->irb, 0, sizeof(struct irb));
+		memset(&cdev->private->dma_area->irb, 0, sizeof(struct irb));
 		ccw_device_accumulate_irb(cdev, irb);
 		goto call_handler;
 	}
diff --git a/drivers/s390/cio/device_id.c b/drivers/s390/cio/device_id.c
index f6df83a9dfbb..740996d0dc8c 100644
--- a/drivers/s390/cio/device_id.c
+++ b/drivers/s390/cio/device_id.c
@@ -99,7 +99,7 @@ static int diag210_to_senseid(struct senseid *senseid, struct diag210 *diag)
 static int diag210_get_dev_info(struct ccw_device *cdev)
 {
 	struct ccw_dev_id *dev_id = &cdev->private->dev_id;
-	struct senseid *senseid = &cdev->private->senseid;
+	struct senseid *senseid = &cdev->private->dma_area->senseid;
 	struct diag210 diag_data;
 	int rc;
 
@@ -134,8 +134,10 @@ static int diag210_get_dev_info(struct ccw_device *cdev)
 static void snsid_init(struct ccw_device *cdev)
 {
 	cdev->private->flags.esid = 0;
-	memset(&cdev->private->senseid, 0, sizeof(cdev->private->senseid));
-	cdev->private->senseid.cu_type = 0xffff;
+
+	memset(&cdev->private->dma_area->senseid, 0,
+	       sizeof(cdev->private->dma_area->senseid));
+	cdev->private->dma_area->senseid.cu_type = 0xffff;
 }
 
 /*
@@ -143,16 +145,16 @@ static void snsid_init(struct ccw_device *cdev)
  */
 static int snsid_check(struct ccw_device *cdev, void *data)
 {
-	struct cmd_scsw *scsw = &cdev->private->irb.scsw.cmd;
+	struct cmd_scsw *scsw = &cdev->private->dma_area->irb.scsw.cmd;
 	int len = sizeof(struct senseid) - scsw->count;
 
 	/* Check for incomplete SENSE ID data. */
 	if (len < SENSE_ID_MIN_LEN)
 		goto out_restart;
-	if (cdev->private->senseid.cu_type == 0xffff)
+	if (cdev->private->dma_area->senseid.cu_type == 0xffff)
 		goto out_restart;
 	/* Check for incompatible SENSE ID data. */
-	if (cdev->private->senseid.reserved != 0xff)
+	if (cdev->private->dma_area->senseid.reserved != 0xff)
 		return -EOPNOTSUPP;
 	/* Check for extended-identification information. */
 	if (len > SENSE_ID_BASIC_LEN)
@@ -170,7 +172,7 @@ static int snsid_check(struct ccw_device *cdev, void *data)
 static void snsid_callback(struct ccw_device *cdev, void *data, int rc)
 {
 	struct ccw_dev_id *id = &cdev->private->dev_id;
-	struct senseid *senseid = &cdev->private->senseid;
+	struct senseid *senseid = &cdev->private->dma_area->senseid;
 	int vm = 0;
 
 	if (rc && MACHINE_IS_VM) {
@@ -200,7 +202,7 @@ void ccw_device_sense_id_start(struct ccw_device *cdev)
 {
 	struct subchannel *sch = to_subchannel(cdev->dev.parent);
 	struct ccw_request *req = &cdev->private->req;
-	struct ccw1 *cp = cdev->private->iccws;
+	struct ccw1 *cp = cdev->private->dma_area->iccws;
 
 	CIO_TRACE_EVENT(4, "snsid");
 	CIO_HEX_EVENT(4, &cdev->private->dev_id, sizeof(cdev->private->dev_id));
@@ -208,7 +210,7 @@ void ccw_device_sense_id_start(struct ccw_device *cdev)
 	snsid_init(cdev);
 	/* Channel program setup. */
 	cp->cmd_code	= CCW_CMD_SENSE_ID;
-	cp->cda		= (u32) (addr_t) &cdev->private->senseid;
+	cp->cda		= (u32) (addr_t) &cdev->private->dma_area->senseid;
 	cp->count	= sizeof(struct senseid);
 	cp->flags	= CCW_FLAG_SLI;
 	/* Request setup. */
diff --git a/drivers/s390/cio/device_ops.c b/drivers/s390/cio/device_ops.c
index 4435ae0b3027..be4acfa9265a 100644
--- a/drivers/s390/cio/device_ops.c
+++ b/drivers/s390/cio/device_ops.c
@@ -429,8 +429,8 @@ struct ciw *ccw_device_get_ciw(struct ccw_device *cdev, __u32 ct)
 	if (cdev->private->flags.esid == 0)
 		return NULL;
 	for (ciw_cnt = 0; ciw_cnt < MAX_CIWS; ciw_cnt++)
-		if (cdev->private->senseid.ciw[ciw_cnt].ct == ct)
-			return cdev->private->senseid.ciw + ciw_cnt;
+		if (cdev->private->dma_area->senseid.ciw[ciw_cnt].ct == ct)
+			return cdev->private->dma_area->senseid.ciw + ciw_cnt;
 	return NULL;
 }
 
@@ -699,6 +699,23 @@ void ccw_device_get_schid(struct ccw_device *cdev, struct subchannel_id *schid)
 }
 EXPORT_SYMBOL_GPL(ccw_device_get_schid);
 
+/**
+ * Allocate zeroed, DMA-coherent, 31-bit addressable memory using
+ * the subchannel's DMA pool. The maximum supported allocation size
+ * is PAGE_SIZE.
+ */
+void *ccw_device_dma_zalloc(struct ccw_device *cdev, size_t size)
+{
+	return cio_gp_dma_zalloc(cdev->private->dma_pool, &cdev->dev, size);
+}
+EXPORT_SYMBOL(ccw_device_dma_zalloc);
+
+void ccw_device_dma_free(struct ccw_device *cdev, void *cpu_addr, size_t size)
+{
+	cio_gp_dma_free(cdev->private->dma_pool, cpu_addr, size);
+}
+EXPORT_SYMBOL(ccw_device_dma_free);
+
 EXPORT_SYMBOL(ccw_device_set_options_mask);
 EXPORT_SYMBOL(ccw_device_set_options);
 EXPORT_SYMBOL(ccw_device_clear_options);
diff --git a/drivers/s390/cio/device_pgid.c b/drivers/s390/cio/device_pgid.c
index d30a3babf176..767a85635a0f 100644
--- a/drivers/s390/cio/device_pgid.c
+++ b/drivers/s390/cio/device_pgid.c
@@ -57,7 +57,7 @@ static void verify_done(struct ccw_device *cdev, int rc)
 static void nop_build_cp(struct ccw_device *cdev)
 {
 	struct ccw_request *req = &cdev->private->req;
-	struct ccw1 *cp = cdev->private->iccws;
+	struct ccw1 *cp = cdev->private->dma_area->iccws;
 
 	cp->cmd_code	= CCW_CMD_NOOP;
 	cp->cda		= 0;
@@ -134,9 +134,9 @@ static void nop_callback(struct ccw_device *cdev, void *data, int rc)
 static void spid_build_cp(struct ccw_device *cdev, u8 fn)
 {
 	struct ccw_request *req = &cdev->private->req;
-	struct ccw1 *cp = cdev->private->iccws;
+	struct ccw1 *cp = cdev->private->dma_area->iccws;
 	int i = pathmask_to_pos(req->lpm);
-	struct pgid *pgid = &cdev->private->pgid[i];
+	struct pgid *pgid = &cdev->private->dma_area->pgid[i];
 
 	pgid->inf.fc	= fn;
 	cp->cmd_code	= CCW_CMD_SET_PGID;
@@ -300,7 +300,7 @@ static int pgid_cmp(struct pgid *p1, struct pgid *p2)
 static void pgid_analyze(struct ccw_device *cdev, struct pgid **p,
 			 int *mismatch, u8 *reserved, u8 *reset)
 {
-	struct pgid *pgid = &cdev->private->pgid[0];
+	struct pgid *pgid = &cdev->private->dma_area->pgid[0];
 	struct pgid *first = NULL;
 	int lpm;
 	int i;
@@ -342,7 +342,7 @@ static u8 pgid_to_donepm(struct ccw_device *cdev)
 		lpm = 0x80 >> i;
 		if ((cdev->private->pgid_valid_mask & lpm) == 0)
 			continue;
-		pgid = &cdev->private->pgid[i];
+		pgid = &cdev->private->dma_area->pgid[i];
 		if (sch->opm & lpm) {
 			if (pgid->inf.ps.state1 != SNID_STATE1_GROUPED)
 				continue;
@@ -368,7 +368,8 @@ static void pgid_fill(struct ccw_device *cdev, struct pgid *pgid)
 	int i;
 
 	for (i = 0; i < 8; i++)
-		memcpy(&cdev->private->pgid[i], pgid, sizeof(struct pgid));
+		memcpy(&cdev->private->dma_area->pgid[i], pgid,
+		       sizeof(struct pgid));
 }
 
 /*
@@ -435,12 +436,12 @@ static void snid_done(struct ccw_device *cdev, int rc)
 static void snid_build_cp(struct ccw_device *cdev)
 {
 	struct ccw_request *req = &cdev->private->req;
-	struct ccw1 *cp = cdev->private->iccws;
+	struct ccw1 *cp = cdev->private->dma_area->iccws;
 	int i = pathmask_to_pos(req->lpm);
 
 	/* Channel program setup. */
 	cp->cmd_code	= CCW_CMD_SENSE_PGID;
-	cp->cda		= (u32) (addr_t) &cdev->private->pgid[i];
+	cp->cda		= (u32) (addr_t) &cdev->private->dma_area->pgid[i];
 	cp->count	= sizeof(struct pgid);
 	cp->flags	= CCW_FLAG_SLI;
 	req->cp		= cp;
@@ -516,7 +517,8 @@ static void verify_start(struct ccw_device *cdev)
 	sch->lpm = sch->schib.pmcw.pam;
 
 	/* Initialize PGID data. */
-	memset(cdev->private->pgid, 0, sizeof(cdev->private->pgid));
+	memset(cdev->private->dma_area->pgid, 0,
+	       sizeof(cdev->private->dma_area->pgid));
 	cdev->private->pgid_valid_mask = 0;
 	cdev->private->pgid_todo_mask = sch->schib.pmcw.pam;
 	cdev->private->path_notoper_mask = 0;
@@ -626,7 +628,7 @@ struct stlck_data {
 static void stlck_build_cp(struct ccw_device *cdev, void *buf1, void *buf2)
 {
 	struct ccw_request *req = &cdev->private->req;
-	struct ccw1 *cp = cdev->private->iccws;
+	struct ccw1 *cp = cdev->private->dma_area->iccws;
 
 	cp[0].cmd_code = CCW_CMD_STLCK;
 	cp[0].cda = (u32) (addr_t) buf1;
diff --git a/drivers/s390/cio/device_status.c b/drivers/s390/cio/device_status.c
index 7d5c7892b2c4..0bd8f2642732 100644
--- a/drivers/s390/cio/device_status.c
+++ b/drivers/s390/cio/device_status.c
@@ -79,15 +79,15 @@ ccw_device_accumulate_ecw(struct ccw_device *cdev, struct irb *irb)
 	 * are condition that have to be met for the extended control
 	 * bit to have meaning. Sick.
 	 */
-	cdev->private->irb.scsw.cmd.ectl = 0;
+	cdev->private->dma_area->irb.scsw.cmd.ectl = 0;
 	if ((irb->scsw.cmd.stctl & SCSW_STCTL_ALERT_STATUS) &&
 	    !(irb->scsw.cmd.stctl & SCSW_STCTL_INTER_STATUS))
-		cdev->private->irb.scsw.cmd.ectl = irb->scsw.cmd.ectl;
+		cdev->private->dma_area->irb.scsw.cmd.ectl = irb->scsw.cmd.ectl;
 	/* Check if extended control word is valid. */
-	if (!cdev->private->irb.scsw.cmd.ectl)
+	if (!cdev->private->dma_area->irb.scsw.cmd.ectl)
 		return;
 	/* Copy concurrent sense / model dependent information. */
-	memcpy (&cdev->private->irb.ecw, irb->ecw, sizeof (irb->ecw));
+	memcpy(&cdev->private->dma_area->irb.ecw, irb->ecw, sizeof(irb->ecw));
 }
 
 /*
@@ -118,7 +118,7 @@ ccw_device_accumulate_esw(struct ccw_device *cdev, struct irb *irb)
 	if (!ccw_device_accumulate_esw_valid(irb))
 		return;
 
-	cdev_irb = &cdev->private->irb;
+	cdev_irb = &cdev->private->dma_area->irb;
 
 	/* Copy last path used mask. */
 	cdev_irb->esw.esw1.lpum = irb->esw.esw1.lpum;
@@ -210,7 +210,7 @@ ccw_device_accumulate_irb(struct ccw_device *cdev, struct irb *irb)
 		ccw_device_path_notoper(cdev);
 	/* No irb accumulation for transport mode irbs. */
 	if (scsw_is_tm(&irb->scsw)) {
-		memcpy(&cdev->private->irb, irb, sizeof(struct irb));
+		memcpy(&cdev->private->dma_area->irb, irb, sizeof(struct irb));
 		return;
 	}
 	/*
@@ -219,7 +219,7 @@ ccw_device_accumulate_irb(struct ccw_device *cdev, struct irb *irb)
 	if (!scsw_is_solicited(&irb->scsw))
 		return;
 
-	cdev_irb = &cdev->private->irb;
+	cdev_irb = &cdev->private->dma_area->irb;
 
 	/*
 	 * If the clear function had been performed, all formerly pending
@@ -227,7 +227,7 @@ ccw_device_accumulate_irb(struct ccw_device *cdev, struct irb *irb)
 	 * intermediate accumulated status to the device driver.
 	 */
 	if (irb->scsw.cmd.fctl & SCSW_FCTL_CLEAR_FUNC)
-		memset(&cdev->private->irb, 0, sizeof(struct irb));
+		memset(&cdev->private->dma_area->irb, 0, sizeof(struct irb));
 
 	/* Copy bits which are valid only for the start function. */
 	if (irb->scsw.cmd.fctl & SCSW_FCTL_START_FUNC) {
@@ -329,9 +329,9 @@ ccw_device_do_sense(struct ccw_device *cdev, struct irb *irb)
 	/*
 	 * We have ending status but no sense information. Do a basic sense.
 	 */
-	sense_ccw = &to_io_private(sch)->sense_ccw;
+	sense_ccw = &to_io_private(sch)->dma_area->sense_ccw;
 	sense_ccw->cmd_code = CCW_CMD_BASIC_SENSE;
-	sense_ccw->cda = (__u32) __pa(cdev->private->irb.ecw);
+	sense_ccw->cda = (__u32) __pa(cdev->private->dma_area->irb.ecw);
 	sense_ccw->count = SENSE_MAX_COUNT;
 	sense_ccw->flags = CCW_FLAG_SLI;
 
@@ -364,7 +364,7 @@ ccw_device_accumulate_basic_sense(struct ccw_device *cdev, struct irb *irb)
 
 	if (!(irb->scsw.cmd.dstat & DEV_STAT_UNIT_CHECK) &&
 	    (irb->scsw.cmd.dstat & DEV_STAT_CHN_END)) {
-		cdev->private->irb.esw.esw0.erw.cons = 1;
+		cdev->private->dma_area->irb.esw.esw0.erw.cons = 1;
 		cdev->private->flags.dosense = 0;
 	}
 	/* Check if path verification is required. */
@@ -386,7 +386,7 @@ ccw_device_accumulate_and_sense(struct ccw_device *cdev, struct irb *irb)
 	/* Check for basic sense. */
 	if (cdev->private->flags.dosense &&
 	    !(irb->scsw.cmd.dstat & DEV_STAT_UNIT_CHECK)) {
-		cdev->private->irb.esw.esw0.erw.cons = 1;
+		cdev->private->dma_area->irb.esw.esw0.erw.cons = 1;
 		cdev->private->flags.dosense = 0;
 		return 0;
 	}
diff --git a/drivers/s390/cio/io_sch.h b/drivers/s390/cio/io_sch.h
index 90e4e3a7841b..c03b4a19974e 100644
--- a/drivers/s390/cio/io_sch.h
+++ b/drivers/s390/cio/io_sch.h
@@ -9,15 +9,20 @@
 #include "css.h"
 #include "orb.h"
 
+struct io_subchannel_dma_area {
+	struct ccw1 sense_ccw;	/* static ccw for sense command */
+};
+
 struct io_subchannel_private {
 	union orb orb;		/* operation request block */
-	struct ccw1 sense_ccw;	/* static ccw for sense command */
 	struct ccw_device *cdev;/* pointer to the child ccw device */
 	struct {
 		unsigned int suspend:1;	/* allow suspend */
 		unsigned int prefetch:1;/* deny prefetch */
 		unsigned int inter:1;	/* suppress intermediate interrupts */
 	} __packed options;
+	struct io_subchannel_dma_area *dma_area;
+	dma_addr_t dma_area_dma;
 } __aligned(8);
 
 #define to_io_private(n) ((struct io_subchannel_private *) \
@@ -115,6 +120,13 @@ enum cdev_todo {
 #define FAKE_CMD_IRB	1
 #define FAKE_TM_IRB	2
 
+struct ccw_device_dma_area {
+	struct senseid senseid;	/* SenseID info */
+	struct ccw1 iccws[2];	/* ccws for SNID/SID/SPGID commands */
+	struct irb irb;		/* device status */
+	struct pgid pgid[8];	/* path group IDs per chpid*/
+};
+
 struct ccw_device_private {
 	struct ccw_device *cdev;
 	struct subchannel *sch;
@@ -156,11 +168,7 @@ struct ccw_device_private {
 	} __attribute__((packed)) flags;
 	unsigned long intparm;	/* user interruption parameter */
 	struct qdio_irq *qdio_data;
-	struct irb irb;		/* device status */
 	int async_kill_io_rc;
-	struct senseid senseid;	/* SenseID info */
-	struct pgid pgid[8];	/* path group IDs per chpid*/
-	struct ccw1 iccws[2];	/* ccws for SNID/SID/SPGID commands */
 	struct work_struct todo_work;
 	enum cdev_todo todo;
 	wait_queue_head_t wait_q;
@@ -169,6 +177,8 @@ struct ccw_device_private {
 	struct list_head cmb_list;	/* list of measured devices */
 	u64 cmb_start_time;		/* clock value of cmb reset */
 	void *cmb_wait;			/* deferred cmb enable/disable */
+	struct gen_pool *dma_pool;
+	struct ccw_device_dma_area *dma_area;
 	enum interruption_class int_class;
 };
 
diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
index 6a3076881321..f995798bb025 100644
--- a/drivers/s390/virtio/virtio_ccw.c
+++ b/drivers/s390/virtio/virtio_ccw.c
@@ -66,7 +66,6 @@ struct virtio_ccw_device {
 	bool device_lost;
 	unsigned int config_ready;
 	void *airq_info;
-	u64 dma_mask;
 };
 
 struct vq_info_block_legacy {
@@ -1255,16 +1254,7 @@ static int virtio_ccw_online(struct ccw_device *cdev)
 		ret = -ENOMEM;
 		goto out_free;
 	}
-
 	vcdev->vdev.dev.parent = &cdev->dev;
-	cdev->dev.dma_mask = &vcdev->dma_mask;
-	/* we are fine with common virtio infrastructure using 64 bit DMA */
-	ret = dma_set_mask_and_coherent(&cdev->dev, DMA_BIT_MASK(64));
-	if (ret) {
-		dev_warn(&cdev->dev, "Failed to enable 64-bit DMA.\n");
-		goto out_free;
-	}
-
 	vcdev->config_block = kzalloc(sizeof(*vcdev->config_block),
 				   GFP_DMA | GFP_KERNEL);
 	if (!vcdev->config_block) {
-- 
2.13.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 4/8] s390/airq: use DMA memory for adapter interrupts
  2019-05-23 16:22 [PATCH v2 0/8] s390: virtio: support protected virtualization Michael Mueller
                   ` (2 preceding siblings ...)
  2019-05-23 16:22 ` [PATCH v2 3/8] s390/cio: add basic protected virtualization support Michael Mueller
@ 2019-05-23 16:22 ` Michael Mueller
  2019-05-25  9:51   ` Sebastian Ott
  2019-05-27 10:53   ` Cornelia Huck
  2019-05-23 16:22 ` [PATCH v2 5/8] virtio/s390: use cacheline aligned airq bit vectors Michael Mueller
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-23 16:22 UTC (permalink / raw)
  To: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Sebastian Ott, Heiko Carstens
  Cc: Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel,
	Michael Mueller

From: Halil Pasic <pasic@linux.ibm.com>

Protected virtualization guests have to use shared pages for airq
notifier bit vectors, because the hypervisor needs to write these bits.

Let us make sure we allocate DMA memory for the notifier bit vectors by
replacing the kmem_cache with a dma_pool and kzalloc() with
cio_dma_zalloc().

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
---
 arch/s390/include/asm/airq.h |  2 ++
 drivers/s390/cio/airq.c      | 32 ++++++++++++++++++++------------
 drivers/s390/cio/cio.h       |  2 ++
 drivers/s390/cio/css.c       |  1 +
 4 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/arch/s390/include/asm/airq.h b/arch/s390/include/asm/airq.h
index c10d2ee2dfda..01936fdfaddb 100644
--- a/arch/s390/include/asm/airq.h
+++ b/arch/s390/include/asm/airq.h
@@ -11,6 +11,7 @@
 #define _ASM_S390_AIRQ_H
 
 #include <linux/bit_spinlock.h>
+#include <linux/dma-mapping.h>
 
 struct airq_struct {
 	struct hlist_node list;		/* Handler queueing. */
@@ -29,6 +30,7 @@ void unregister_adapter_interrupt(struct airq_struct *airq);
 /* Adapter interrupt bit vector */
 struct airq_iv {
 	unsigned long *vector;	/* Adapter interrupt bit vector */
+	dma_addr_t vector_dma; /* Adapter interrupt bit vector dma */
 	unsigned long *avail;	/* Allocation bit mask for the bit vector */
 	unsigned long *bitlock;	/* Lock bit mask for the bit vector */
 	unsigned long *ptr;	/* Pointer associated with each bit */
diff --git a/drivers/s390/cio/airq.c b/drivers/s390/cio/airq.c
index 4534afc63591..89d26e43004d 100644
--- a/drivers/s390/cio/airq.c
+++ b/drivers/s390/cio/airq.c
@@ -16,9 +16,11 @@
 #include <linux/mutex.h>
 #include <linux/rculist.h>
 #include <linux/slab.h>
+#include <linux/dmapool.h>
 
 #include <asm/airq.h>
 #include <asm/isc.h>
+#include <asm/cio.h>
 
 #include "cio.h"
 #include "cio_debug.h"
@@ -27,7 +29,7 @@
 static DEFINE_SPINLOCK(airq_lists_lock);
 static struct hlist_head airq_lists[MAX_ISC+1];
 
-static struct kmem_cache *airq_iv_cache;
+static struct dma_pool *airq_iv_cache;
 
 /**
  * register_adapter_interrupt() - register adapter interrupt handler
@@ -115,6 +117,11 @@ void __init init_airq_interrupts(void)
 	setup_irq(THIN_INTERRUPT, &airq_interrupt);
 }
 
+static inline unsigned long iv_size(unsigned long bits)
+{
+	return BITS_TO_LONGS(bits) * sizeof(unsigned long);
+}
+
 /**
  * airq_iv_create - create an interrupt vector
  * @bits: number of bits in the interrupt vector
@@ -132,17 +139,18 @@ struct airq_iv *airq_iv_create(unsigned long bits, unsigned long flags)
 		goto out;
 	iv->bits = bits;
 	iv->flags = flags;
-	size = BITS_TO_LONGS(bits) * sizeof(unsigned long);
+	size = iv_size(bits);
 
 	if (flags & AIRQ_IV_CACHELINE) {
 		if ((cache_line_size() * BITS_PER_BYTE) < bits)
 			goto out_free;
 
-		iv->vector = kmem_cache_zalloc(airq_iv_cache, GFP_KERNEL);
+		iv->vector = dma_pool_zalloc(airq_iv_cache, GFP_KERNEL,
+					     &iv->vector_dma);
 		if (!iv->vector)
 			goto out_free;
 	} else {
-		iv->vector = kzalloc(size, GFP_KERNEL);
+		iv->vector = cio_dma_zalloc(size);
 		if (!iv->vector)
 			goto out_free;
 	}
@@ -179,9 +187,9 @@ struct airq_iv *airq_iv_create(unsigned long bits, unsigned long flags)
 	kfree(iv->bitlock);
 	kfree(iv->avail);
 	if (iv->flags & AIRQ_IV_CACHELINE)
-		kmem_cache_free(airq_iv_cache, iv->vector);
+		dma_pool_free(airq_iv_cache, iv->vector, iv->vector_dma);
 	else
-		kfree(iv->vector);
+		cio_dma_free(iv->vector, size);
 	kfree(iv);
 out:
 	return NULL;
@@ -198,9 +206,9 @@ void airq_iv_release(struct airq_iv *iv)
 	kfree(iv->ptr);
 	kfree(iv->bitlock);
 	if (iv->flags & AIRQ_IV_CACHELINE)
-		kmem_cache_free(airq_iv_cache, iv->vector);
+		dma_pool_free(airq_iv_cache, iv->vector, iv->vector_dma);
 	else
-		kfree(iv->vector);
+		cio_dma_free(iv->vector, iv_size(iv->bits));
 	kfree(iv->avail);
 	kfree(iv);
 }
@@ -295,12 +303,12 @@ unsigned long airq_iv_scan(struct airq_iv *iv, unsigned long start,
 }
 EXPORT_SYMBOL(airq_iv_scan);
 
-static int __init airq_init(void)
+int __init airq_init(void)
 {
-	airq_iv_cache = kmem_cache_create("airq_iv_cache", cache_line_size(),
-					  cache_line_size(), 0, NULL);
+	airq_iv_cache = dma_pool_create("airq_iv_cache", cio_get_dma_css_dev(),
+					cache_line_size(),
+					cache_line_size(), PAGE_SIZE);
 	if (!airq_iv_cache)
 		return -ENOMEM;
 	return 0;
 }
-subsys_initcall(airq_init);
diff --git a/drivers/s390/cio/cio.h b/drivers/s390/cio/cio.h
index 06a91743335a..4d6c7d16416e 100644
--- a/drivers/s390/cio/cio.h
+++ b/drivers/s390/cio/cio.h
@@ -135,6 +135,8 @@ extern int cio_commit_config(struct subchannel *sch);
 int cio_tm_start_key(struct subchannel *sch, struct tcw *tcw, u8 lpm, u8 key);
 int cio_tm_intrg(struct subchannel *sch);
 
+extern int __init airq_init(void);
+
 /* Use with care. */
 #ifdef CONFIG_CCW_CONSOLE
 extern struct subchannel *cio_probe_console(void);
diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
index 789f6ecdbbcc..f09521771a32 100644
--- a/drivers/s390/cio/css.c
+++ b/drivers/s390/cio/css.c
@@ -1173,6 +1173,7 @@ static int __init css_bus_init(void)
 		goto out_unregister;
 	}
 	cio_dma_pool_init();
+	airq_init();
 	css_init_done = 1;
 
 	/* Enable default isc for I/O subchannels. */
-- 
2.13.4



* [PATCH v2 5/8] virtio/s390: use cacheline aligned airq bit vectors
  2019-05-23 16:22 [PATCH v2 0/8] s390: virtio: support protected virtualization Michael Mueller
                   ` (3 preceding siblings ...)
  2019-05-23 16:22 ` [PATCH v2 4/8] s390/airq: use DMA memory for adapter interrupts Michael Mueller
@ 2019-05-23 16:22 ` Michael Mueller
  2019-05-27 10:55   ` Cornelia Huck
  2019-05-23 16:22 ` [PATCH v2 6/8] virtio/s390: add indirection to indicators access Michael Mueller
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: Michael Mueller @ 2019-05-23 16:22 UTC (permalink / raw)
  To: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Sebastian Ott, Heiko Carstens
  Cc: Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel,
	Michael Mueller

From: Halil Pasic <pasic@linux.ibm.com>

The flag AIRQ_IV_CACHELINE was recently added to airq_iv_create(). Let
us use it! We actually wanted the vector to span a cacheline all along.

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
---
 drivers/s390/virtio/virtio_ccw.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
index f995798bb025..1da7430f94c8 100644
--- a/drivers/s390/virtio/virtio_ccw.c
+++ b/drivers/s390/virtio/virtio_ccw.c
@@ -216,7 +216,8 @@ static struct airq_info *new_airq_info(void)
 	if (!info)
 		return NULL;
 	rwlock_init(&info->lock);
-	info->aiv = airq_iv_create(VIRTIO_IV_BITS, AIRQ_IV_ALLOC | AIRQ_IV_PTR);
+	info->aiv = airq_iv_create(VIRTIO_IV_BITS, AIRQ_IV_ALLOC | AIRQ_IV_PTR
+				   | AIRQ_IV_CACHELINE);
 	if (!info->aiv) {
 		kfree(info);
 		return NULL;
-- 
2.13.4



* [PATCH v2 6/8] virtio/s390: add indirection to indicators access
  2019-05-23 16:22 [PATCH v2 0/8] s390: virtio: support protected virtualization Michael Mueller
                   ` (4 preceding siblings ...)
  2019-05-23 16:22 ` [PATCH v2 5/8] virtio/s390: use cacheline aligned airq bit vectors Michael Mueller
@ 2019-05-23 16:22 ` Michael Mueller
  2019-05-27 11:00   ` Cornelia Huck
  2019-05-23 16:22 ` [PATCH v2 7/8] virtio/s390: use DMA memory for ccw I/O and classic notifiers Michael Mueller
  2019-05-23 16:22 ` [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA Michael Mueller
  7 siblings, 1 reply; 36+ messages in thread
From: Michael Mueller @ 2019-05-23 16:22 UTC (permalink / raw)
  To: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Sebastian Ott, Heiko Carstens
  Cc: Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel,
	Michael Mueller

From: Halil Pasic <pasic@linux.ibm.com>

This will come in handy soon when we pull out the indicators from
virtio_ccw_device to a memory area that is shared with the hypervisor
(in particular for protected virtualization guests).

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
---
 drivers/s390/virtio/virtio_ccw.c | 40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
index 1da7430f94c8..e96a8cc56ec2 100644
--- a/drivers/s390/virtio/virtio_ccw.c
+++ b/drivers/s390/virtio/virtio_ccw.c
@@ -68,6 +68,16 @@ struct virtio_ccw_device {
 	void *airq_info;
 };
 
+static inline unsigned long *indicators(struct virtio_ccw_device *vcdev)
+{
+	return &vcdev->indicators;
+}
+
+static inline unsigned long *indicators2(struct virtio_ccw_device *vcdev)
+{
+	return &vcdev->indicators2;
+}
+
 struct vq_info_block_legacy {
 	__u64 queue;
 	__u32 align;
@@ -338,17 +348,17 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
 		ccw->cda = (__u32)(unsigned long) thinint_area;
 	} else {
 		/* payload is the address of the indicators */
-		indicatorp = kmalloc(sizeof(&vcdev->indicators),
+		indicatorp = kmalloc(sizeof(indicators(vcdev)),
 				     GFP_DMA | GFP_KERNEL);
 		if (!indicatorp)
 			return;
 		*indicatorp = 0;
 		ccw->cmd_code = CCW_CMD_SET_IND;
-		ccw->count = sizeof(&vcdev->indicators);
+		ccw->count = sizeof(indicators(vcdev));
 		ccw->cda = (__u32)(unsigned long) indicatorp;
 	}
 	/* Deregister indicators from host. */
-	vcdev->indicators = 0;
+	*indicators(vcdev) = 0;
 	ccw->flags = 0;
 	ret = ccw_io_helper(vcdev, ccw,
 			    vcdev->is_thinint ?
@@ -657,10 +667,10 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 	 * We need a data area under 2G to communicate. Our payload is
 	 * the address of the indicators.
 	*/
-	indicatorp = kmalloc(sizeof(&vcdev->indicators), GFP_DMA | GFP_KERNEL);
+	indicatorp = kmalloc(sizeof(indicators(vcdev)), GFP_DMA | GFP_KERNEL);
 	if (!indicatorp)
 		goto out;
-	*indicatorp = (unsigned long) &vcdev->indicators;
+	*indicatorp = (unsigned long) indicators(vcdev);
 	if (vcdev->is_thinint) {
 		ret = virtio_ccw_register_adapter_ind(vcdev, vqs, nvqs, ccw);
 		if (ret)
@@ -669,21 +679,21 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 	}
 	if (!vcdev->is_thinint) {
 		/* Register queue indicators with host. */
-		vcdev->indicators = 0;
+		*indicators(vcdev) = 0;
 		ccw->cmd_code = CCW_CMD_SET_IND;
 		ccw->flags = 0;
-		ccw->count = sizeof(&vcdev->indicators);
+		ccw->count = sizeof(indicators(vcdev));
 		ccw->cda = (__u32)(unsigned long) indicatorp;
 		ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_IND);
 		if (ret)
 			goto out;
 	}
 	/* Register indicators2 with host for config changes */
-	*indicatorp = (unsigned long) &vcdev->indicators2;
-	vcdev->indicators2 = 0;
+	*indicatorp = (unsigned long) indicators2(vcdev);
+	*indicators2(vcdev) = 0;
 	ccw->cmd_code = CCW_CMD_SET_CONF_IND;
 	ccw->flags = 0;
-	ccw->count = sizeof(&vcdev->indicators2);
+	ccw->count = sizeof(indicators2(vcdev));
 	ccw->cda = (__u32)(unsigned long) indicatorp;
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_CONF_IND);
 	if (ret)
@@ -1093,17 +1103,17 @@ static void virtio_ccw_int_handler(struct ccw_device *cdev,
 			vcdev->err = -EIO;
 	}
 	virtio_ccw_check_activity(vcdev, activity);
-	for_each_set_bit(i, &vcdev->indicators,
-			 sizeof(vcdev->indicators) * BITS_PER_BYTE) {
+	for_each_set_bit(i, indicators(vcdev),
+			 sizeof(*indicators(vcdev)) * BITS_PER_BYTE) {
 		/* The bit clear must happen before the vring kick. */
-		clear_bit(i, &vcdev->indicators);
+		clear_bit(i, indicators(vcdev));
 		barrier();
 		vq = virtio_ccw_vq_by_ind(vcdev, i);
 		vring_interrupt(0, vq);
 	}
-	if (test_bit(0, &vcdev->indicators2)) {
+	if (test_bit(0, indicators2(vcdev))) {
 		virtio_config_changed(&vcdev->vdev);
-		clear_bit(0, &vcdev->indicators2);
+		clear_bit(0, indicators2(vcdev));
 	}
 }
 
-- 
2.13.4



* [PATCH v2 7/8] virtio/s390: use DMA memory for ccw I/O and classic notifiers
  2019-05-23 16:22 [PATCH v2 0/8] s390: virtio: support protected virtualization Michael Mueller
                   ` (5 preceding siblings ...)
  2019-05-23 16:22 ` [PATCH v2 6/8] virtio/s390: add indirection to indicators access Michael Mueller
@ 2019-05-23 16:22 ` Michael Mueller
  2019-05-27 11:49   ` Cornelia Huck
  2019-05-23 16:22 ` [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA Michael Mueller
  7 siblings, 1 reply; 36+ messages in thread
From: Michael Mueller @ 2019-05-23 16:22 UTC (permalink / raw)
  To: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Sebastian Ott, Heiko Carstens
  Cc: Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel,
	Michael Mueller

From: Halil Pasic <pasic@linux.ibm.com>

Before this change, virtio-ccw could get away with not using the DMA API
for the pieces of memory it does ccw I/O with. With protected
virtualization this has to change, since the hypervisor needs to read,
and sometimes also write, these pieces of memory.

The hypervisor is supposed to poke the classic notifiers, if these are
used, out of band with regard to ccw I/O. So these need to be allocated
as DMA memory (which is shared memory for protected virtualization
guests).

Let us factor out everything from struct virtio_ccw_device that needs to
be DMA memory in a satellite that is allocated as such.

Note: The control blocks of I/O instructions do not need to be shared.
These are marshalled by the ultravisor.

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
---
 drivers/s390/virtio/virtio_ccw.c | 177 +++++++++++++++++++++------------------
 1 file changed, 96 insertions(+), 81 deletions(-)

diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
index e96a8cc56ec2..03c9f7001fb1 100644
--- a/drivers/s390/virtio/virtio_ccw.c
+++ b/drivers/s390/virtio/virtio_ccw.c
@@ -46,9 +46,15 @@ struct vq_config_block {
 #define VIRTIO_CCW_CONFIG_SIZE 0x100
 /* same as PCI config space size, should be enough for all drivers */
 
+struct vcdev_dma_area {
+	unsigned long indicators;
+	unsigned long indicators2;
+	struct vq_config_block config_block;
+	__u8 status;
+};
+
 struct virtio_ccw_device {
 	struct virtio_device vdev;
-	__u8 *status;
 	__u8 config[VIRTIO_CCW_CONFIG_SIZE];
 	struct ccw_device *cdev;
 	__u32 curr_io;
@@ -58,24 +64,22 @@ struct virtio_ccw_device {
 	spinlock_t lock;
 	struct mutex io_lock; /* Serializes I/O requests */
 	struct list_head virtqueues;
-	unsigned long indicators;
-	unsigned long indicators2;
-	struct vq_config_block *config_block;
 	bool is_thinint;
 	bool going_away;
 	bool device_lost;
 	unsigned int config_ready;
 	void *airq_info;
+	struct vcdev_dma_area *dma_area;
 };
 
 static inline unsigned long *indicators(struct virtio_ccw_device *vcdev)
 {
-	return &vcdev->indicators;
+	return &vcdev->dma_area->indicators;
 }
 
 static inline unsigned long *indicators2(struct virtio_ccw_device *vcdev)
 {
-	return &vcdev->indicators2;
+	return &vcdev->dma_area->indicators2;
 }
 
 struct vq_info_block_legacy {
@@ -176,6 +180,22 @@ static struct virtio_ccw_device *to_vc_device(struct virtio_device *vdev)
 	return container_of(vdev, struct virtio_ccw_device, vdev);
 }
 
+static inline void *__vc_dma_alloc(struct virtio_device *vdev, size_t size)
+{
+	return ccw_device_dma_zalloc(to_vc_device(vdev)->cdev, size);
+}
+
+static inline void __vc_dma_free(struct virtio_device *vdev, size_t size,
+				 void *cpu_addr)
+{
+	return ccw_device_dma_free(to_vc_device(vdev)->cdev, cpu_addr, size);
+}
+
+#define vc_dma_alloc_struct(vdev, ptr) \
+	({ptr = __vc_dma_alloc(vdev, sizeof(*(ptr))); })
+#define vc_dma_free_struct(vdev, ptr) \
+	__vc_dma_free(vdev, sizeof(*(ptr)), (ptr))
+
 static void drop_airq_indicator(struct virtqueue *vq, struct airq_info *info)
 {
 	unsigned long i, flags;
@@ -336,8 +356,7 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
 	struct airq_info *airq_info = vcdev->airq_info;
 
 	if (vcdev->is_thinint) {
-		thinint_area = kzalloc(sizeof(*thinint_area),
-				       GFP_DMA | GFP_KERNEL);
+		vc_dma_alloc_struct(&vcdev->vdev, thinint_area);
 		if (!thinint_area)
 			return;
 		thinint_area->summary_indicator =
@@ -348,8 +367,8 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
 		ccw->cda = (__u32)(unsigned long) thinint_area;
 	} else {
 		/* payload is the address of the indicators */
-		indicatorp = kmalloc(sizeof(indicators(vcdev)),
-				     GFP_DMA | GFP_KERNEL);
+		indicatorp = __vc_dma_alloc(&vcdev->vdev,
+					   sizeof(indicators(vcdev)));
 		if (!indicatorp)
 			return;
 		*indicatorp = 0;
@@ -369,8 +388,9 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
 			 "Failed to deregister indicators (%d)\n", ret);
 	else if (vcdev->is_thinint)
 		virtio_ccw_drop_indicators(vcdev);
-	kfree(indicatorp);
-	kfree(thinint_area);
+	__vc_dma_free(&vcdev->vdev, sizeof(indicators(vcdev)),
+		      indicatorp);
+	vc_dma_free_struct(&vcdev->vdev, thinint_area);
 }
 
 static inline long __do_kvm_notify(struct subchannel_id schid,
@@ -417,15 +437,15 @@ static int virtio_ccw_read_vq_conf(struct virtio_ccw_device *vcdev,
 {
 	int ret;
 
-	vcdev->config_block->index = index;
+	vcdev->dma_area->config_block.index = index;
 	ccw->cmd_code = CCW_CMD_READ_VQ_CONF;
 	ccw->flags = 0;
 	ccw->count = sizeof(struct vq_config_block);
-	ccw->cda = (__u32)(unsigned long)(vcdev->config_block);
+	ccw->cda = (__u32)(unsigned long)(&vcdev->dma_area->config_block);
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF);
 	if (ret)
 		return ret;
-	return vcdev->config_block->num ?: -ENOENT;
+	return vcdev->dma_area->config_block.num ?: -ENOENT;
 }
 
 static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw)
@@ -470,7 +490,7 @@ static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw)
 			 ret, index);
 
 	vring_del_virtqueue(vq);
-	kfree(info->info_block);
+	vc_dma_free_struct(vq->vdev, info->info_block);
 	kfree(info);
 }
 
@@ -480,7 +500,7 @@ static void virtio_ccw_del_vqs(struct virtio_device *vdev)
 	struct ccw1 *ccw;
 	struct virtio_ccw_device *vcdev = to_vc_device(vdev);
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, ccw);
 	if (!ccw)
 		return;
 
@@ -489,7 +509,7 @@ static void virtio_ccw_del_vqs(struct virtio_device *vdev)
 	list_for_each_entry_safe(vq, n, &vdev->vqs, list)
 		virtio_ccw_del_vq(vq, ccw);
 
-	kfree(ccw);
+	vc_dma_free_struct(vdev, ccw);
 }
 
 static struct virtqueue *virtio_ccw_setup_vq(struct virtio_device *vdev,
@@ -512,8 +532,7 @@ static struct virtqueue *virtio_ccw_setup_vq(struct virtio_device *vdev,
 		err = -ENOMEM;
 		goto out_err;
 	}
-	info->info_block = kzalloc(sizeof(*info->info_block),
-				   GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, info->info_block);
 	if (!info->info_block) {
 		dev_warn(&vcdev->cdev->dev, "no info block\n");
 		err = -ENOMEM;
@@ -577,7 +596,7 @@ static struct virtqueue *virtio_ccw_setup_vq(struct virtio_device *vdev,
 	if (vq)
 		vring_del_virtqueue(vq);
 	if (info) {
-		kfree(info->info_block);
+		vc_dma_free_struct(vdev, info->info_block);
 	}
 	kfree(info);
 	return ERR_PTR(err);
@@ -591,7 +610,7 @@ static int virtio_ccw_register_adapter_ind(struct virtio_ccw_device *vcdev,
 	struct virtio_thinint_area *thinint_area = NULL;
 	struct airq_info *info;
 
-	thinint_area = kzalloc(sizeof(*thinint_area), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(&vcdev->vdev, thinint_area);
 	if (!thinint_area) {
 		ret = -ENOMEM;
 		goto out;
@@ -627,7 +646,7 @@ static int virtio_ccw_register_adapter_ind(struct virtio_ccw_device *vcdev,
 		virtio_ccw_drop_indicators(vcdev);
 	}
 out:
-	kfree(thinint_area);
+	vc_dma_free_struct(&vcdev->vdev, thinint_area);
 	return ret;
 }
 
@@ -643,7 +662,7 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 	int ret, i, queue_idx = 0;
 	struct ccw1 *ccw;
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, ccw);
 	if (!ccw)
 		return -ENOMEM;
 
@@ -667,7 +686,7 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 	 * We need a data area under 2G to communicate. Our payload is
 	 * the address of the indicators.
 	*/
-	indicatorp = kmalloc(sizeof(indicators(vcdev)), GFP_DMA | GFP_KERNEL);
+	indicatorp = __vc_dma_alloc(&vcdev->vdev, sizeof(indicators(vcdev)));
 	if (!indicatorp)
 		goto out;
 	*indicatorp = (unsigned long) indicators(vcdev);
@@ -699,12 +718,16 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 	if (ret)
 		goto out;
 
-	kfree(indicatorp);
-	kfree(ccw);
+	if (indicatorp)
+		__vc_dma_free(&vcdev->vdev, sizeof(indicators(vcdev)),
+			       indicatorp);
+	vc_dma_free_struct(vdev, ccw);
 	return 0;
 out:
-	kfree(indicatorp);
-	kfree(ccw);
+	if (indicatorp)
+		__vc_dma_free(&vcdev->vdev, sizeof(indicators(vcdev)),
+			       indicatorp);
+	vc_dma_free_struct(vdev, ccw);
 	virtio_ccw_del_vqs(vdev);
 	return ret;
 }
@@ -714,12 +737,12 @@ static void virtio_ccw_reset(struct virtio_device *vdev)
 	struct virtio_ccw_device *vcdev = to_vc_device(vdev);
 	struct ccw1 *ccw;
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, ccw);
 	if (!ccw)
 		return;
 
 	/* Zero status bits. */
-	*vcdev->status = 0;
+	vcdev->dma_area->status = 0;
 
 	/* Send a reset ccw on device. */
 	ccw->cmd_code = CCW_CMD_VDEV_RESET;
@@ -727,22 +750,22 @@ static void virtio_ccw_reset(struct virtio_device *vdev)
 	ccw->count = 0;
 	ccw->cda = 0;
 	ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_RESET);
-	kfree(ccw);
+	vc_dma_free_struct(vdev, ccw);
 }
 
 static u64 virtio_ccw_get_features(struct virtio_device *vdev)
 {
 	struct virtio_ccw_device *vcdev = to_vc_device(vdev);
 	struct virtio_feature_desc *features;
+	struct ccw1 *ccw;
 	int ret;
 	u64 rc;
-	struct ccw1 *ccw;
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, ccw);
 	if (!ccw)
 		return 0;
 
-	features = kzalloc(sizeof(*features), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, features);
 	if (!features) {
 		rc = 0;
 		goto out_free;
@@ -775,8 +798,8 @@ static u64 virtio_ccw_get_features(struct virtio_device *vdev)
 		rc |= (u64)le32_to_cpu(features->features) << 32;
 
 out_free:
-	kfree(features);
-	kfree(ccw);
+	vc_dma_free_struct(vdev, features);
+	vc_dma_free_struct(vdev, ccw);
 	return rc;
 }
 
@@ -801,11 +824,11 @@ static int virtio_ccw_finalize_features(struct virtio_device *vdev)
 		return -EINVAL;
 	}
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, ccw);
 	if (!ccw)
 		return -ENOMEM;
 
-	features = kzalloc(sizeof(*features), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, features);
 	if (!features) {
 		ret = -ENOMEM;
 		goto out_free;
@@ -840,8 +863,8 @@ static int virtio_ccw_finalize_features(struct virtio_device *vdev)
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_FEAT);
 
 out_free:
-	kfree(features);
-	kfree(ccw);
+	vc_dma_free_struct(vdev, features);
+	vc_dma_free_struct(vdev, ccw);
 
 	return ret;
 }
@@ -855,11 +878,11 @@ static void virtio_ccw_get_config(struct virtio_device *vdev,
 	void *config_area;
 	unsigned long flags;
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, ccw);
 	if (!ccw)
 		return;
 
-	config_area = kzalloc(VIRTIO_CCW_CONFIG_SIZE, GFP_DMA | GFP_KERNEL);
+	config_area = __vc_dma_alloc(vdev, VIRTIO_CCW_CONFIG_SIZE);
 	if (!config_area)
 		goto out_free;
 
@@ -881,8 +904,8 @@ static void virtio_ccw_get_config(struct virtio_device *vdev,
 		memcpy(buf, config_area + offset, len);
 
 out_free:
-	kfree(config_area);
-	kfree(ccw);
+	__vc_dma_free(vdev, VIRTIO_CCW_CONFIG_SIZE, config_area);
+	vc_dma_free_struct(vdev, ccw);
 }
 
 static void virtio_ccw_set_config(struct virtio_device *vdev,
@@ -894,11 +917,11 @@ static void virtio_ccw_set_config(struct virtio_device *vdev,
 	void *config_area;
 	unsigned long flags;
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, ccw);
 	if (!ccw)
 		return;
 
-	config_area = kzalloc(VIRTIO_CCW_CONFIG_SIZE, GFP_DMA | GFP_KERNEL);
+	config_area = __vc_dma_alloc(vdev, VIRTIO_CCW_CONFIG_SIZE);
 	if (!config_area)
 		goto out_free;
 
@@ -917,61 +940,61 @@ static void virtio_ccw_set_config(struct virtio_device *vdev,
 	ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_CONFIG);
 
 out_free:
-	kfree(config_area);
-	kfree(ccw);
+	__vc_dma_free(vdev, VIRTIO_CCW_CONFIG_SIZE, config_area);
+	vc_dma_free_struct(vdev, ccw);
 }
 
 static u8 virtio_ccw_get_status(struct virtio_device *vdev)
 {
 	struct virtio_ccw_device *vcdev = to_vc_device(vdev);
-	u8 old_status = *vcdev->status;
+	u8 old_status = vcdev->dma_area->status;
 	struct ccw1 *ccw;
 
 	if (vcdev->revision < 1)
-		return *vcdev->status;
+		return vcdev->dma_area->status;
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, ccw);
 	if (!ccw)
 		return old_status;
 
 	ccw->cmd_code = CCW_CMD_READ_STATUS;
 	ccw->flags = 0;
-	ccw->count = sizeof(*vcdev->status);
-	ccw->cda = (__u32)(unsigned long)vcdev->status;
+	ccw->count = sizeof(vcdev->dma_area->status);
+	ccw->cda = (__u32)(unsigned long)&vcdev->dma_area->status;
 	ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_STATUS);
 /*
  * If the channel program failed (should only happen if the device
  * was hotunplugged, and then we clean up via the machine check
- * handler anyway), vcdev->status was not overwritten and we just
+ * handler anyway), vcdev->dma_area->status was not overwritten and we just
  * return the old status, which is fine.
 */
-	kfree(ccw);
+	vc_dma_free_struct(vdev, ccw);
 
-	return *vcdev->status;
+	return vcdev->dma_area->status;
 }
 
 static void virtio_ccw_set_status(struct virtio_device *vdev, u8 status)
 {
 	struct virtio_ccw_device *vcdev = to_vc_device(vdev);
-	u8 old_status = *vcdev->status;
+	u8 old_status = vcdev->dma_area->status;
 	struct ccw1 *ccw;
 	int ret;
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(vdev, ccw);
 	if (!ccw)
 		return;
 
 	/* Write the status to the host. */
-	*vcdev->status = status;
+	vcdev->dma_area->status = status;
 	ccw->cmd_code = CCW_CMD_WRITE_STATUS;
 	ccw->flags = 0;
 	ccw->count = sizeof(status);
-	ccw->cda = (__u32)(unsigned long)vcdev->status;
+	ccw->cda = (__u32)(unsigned long)&vcdev->dma_area->status;
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_STATUS);
 	/* Write failed? We assume status is unchanged. */
 	if (ret)
-		*vcdev->status = old_status;
-	kfree(ccw);
+		vcdev->dma_area->status = old_status;
+	vc_dma_free_struct(vdev, ccw);
 }
 
 static const char *virtio_ccw_bus_name(struct virtio_device *vdev)
@@ -1004,8 +1027,7 @@ static void virtio_ccw_release_dev(struct device *_d)
 	struct virtio_device *dev = dev_to_virtio(_d);
 	struct virtio_ccw_device *vcdev = to_vc_device(dev);
 
-	kfree(vcdev->status);
-	kfree(vcdev->config_block);
+	vc_dma_free_struct(&vcdev->vdev, vcdev->dma_area);
 	kfree(vcdev);
 }
 
@@ -1213,12 +1235,12 @@ static int virtio_ccw_set_transport_rev(struct virtio_ccw_device *vcdev)
 	struct ccw1 *ccw;
 	int ret;
 
-	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(&vcdev->vdev, ccw);
 	if (!ccw)
 		return -ENOMEM;
-	rev = kzalloc(sizeof(*rev), GFP_DMA | GFP_KERNEL);
+	vc_dma_alloc_struct(&vcdev->vdev, rev);
 	if (!rev) {
-		kfree(ccw);
+		vc_dma_free_struct(&vcdev->vdev, ccw);
 		return -ENOMEM;
 	}
 
@@ -1248,8 +1270,8 @@ static int virtio_ccw_set_transport_rev(struct virtio_ccw_device *vcdev)
 		}
 	} while (ret == -EOPNOTSUPP);
 
-	kfree(ccw);
-	kfree(rev);
+	vc_dma_free_struct(&vcdev->vdev, ccw);
+	vc_dma_free_struct(&vcdev->vdev, rev);
 	return ret;
 }
 
@@ -1266,14 +1288,9 @@ static int virtio_ccw_online(struct ccw_device *cdev)
 		goto out_free;
 	}
 	vcdev->vdev.dev.parent = &cdev->dev;
-	vcdev->config_block = kzalloc(sizeof(*vcdev->config_block),
-				   GFP_DMA | GFP_KERNEL);
-	if (!vcdev->config_block) {
-		ret = -ENOMEM;
-		goto out_free;
-	}
-	vcdev->status = kzalloc(sizeof(*vcdev->status), GFP_DMA | GFP_KERNEL);
-	if (!vcdev->status) {
+	vcdev->cdev = cdev;
+	vc_dma_alloc_struct(&vcdev->vdev, vcdev->dma_area);
+	if (!vcdev->dma_area) {
 		ret = -ENOMEM;
 		goto out_free;
 	}
@@ -1282,7 +1299,6 @@ static int virtio_ccw_online(struct ccw_device *cdev)
 
 	vcdev->vdev.dev.release = virtio_ccw_release_dev;
 	vcdev->vdev.config = &virtio_ccw_config_ops;
-	vcdev->cdev = cdev;
 	init_waitqueue_head(&vcdev->wait_q);
 	INIT_LIST_HEAD(&vcdev->virtqueues);
 	spin_lock_init(&vcdev->lock);
@@ -1313,8 +1329,7 @@ static int virtio_ccw_online(struct ccw_device *cdev)
 	return ret;
 out_free:
 	if (vcdev) {
-		kfree(vcdev->status);
-		kfree(vcdev->config_block);
+		vc_dma_free_struct(&vcdev->vdev, vcdev->dma_area);
 	}
 	kfree(vcdev);
 	return ret;
-- 
2.13.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA
  2019-05-23 16:22 [PATCH v2 0/8] s390: virtio: support protected virtualization Michael Mueller
                   ` (6 preceding siblings ...)
  2019-05-23 16:22 ` [PATCH v2 7/8] virtio/s390: use DMA memory for ccw I/O and classic notifiers Michael Mueller
@ 2019-05-23 16:22 ` Michael Mueller
  2019-05-27 12:00   ` Cornelia Huck
  7 siblings, 1 reply; 36+ messages in thread
From: Michael Mueller @ 2019-05-23 16:22 UTC (permalink / raw)
  To: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Sebastian Ott, Heiko Carstens
  Cc: Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel,
	Michael Mueller

From: Halil Pasic <pasic@linux.ibm.com>

The hypervisor needs to interact with the summary indicators, so these
need to be in DMA memory as well (at least for protected virtualization
guests).

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
---
 drivers/s390/virtio/virtio_ccw.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
index 03c9f7001fb1..f666ed397dc0 100644
--- a/drivers/s390/virtio/virtio_ccw.c
+++ b/drivers/s390/virtio/virtio_ccw.c
@@ -140,11 +140,17 @@ static int virtio_ccw_use_airq = 1;
 
 struct airq_info {
 	rwlock_t lock;
-	u8 summary_indicator;
+	u8 summary_indicator_idx;
 	struct airq_struct airq;
 	struct airq_iv *aiv;
 };
 static struct airq_info *airq_areas[MAX_AIRQ_AREAS];
+static u8 *summary_indicators;
+
+static inline u8 *get_summary_indicator(struct airq_info *info)
+{
+	return summary_indicators + info->summary_indicator_idx;
+}
 
 #define CCW_CMD_SET_VQ 0x13
 #define CCW_CMD_VDEV_RESET 0x33
@@ -225,7 +231,7 @@ static void virtio_airq_handler(struct airq_struct *airq, bool floating)
 			break;
 		vring_interrupt(0, (void *)airq_iv_get_ptr(info->aiv, ai));
 	}
-	info->summary_indicator = 0;
+	*(get_summary_indicator(info)) = 0;
 	smp_wmb();
 	/* Walk through indicators field, summary indicator not active. */
 	for (ai = 0;;) {
@@ -237,7 +243,7 @@ static void virtio_airq_handler(struct airq_struct *airq, bool floating)
 	read_unlock(&info->lock);
 }
 
-static struct airq_info *new_airq_info(void)
+static struct airq_info *new_airq_info(int index)
 {
 	struct airq_info *info;
 	int rc;
@@ -253,7 +259,8 @@ static struct airq_info *new_airq_info(void)
 		return NULL;
 	}
 	info->airq.handler = virtio_airq_handler;
-	info->airq.lsi_ptr = &info->summary_indicator;
+	info->summary_indicator_idx = index;
+	info->airq.lsi_ptr = get_summary_indicator(info);
 	info->airq.lsi_mask = 0xff;
 	info->airq.isc = VIRTIO_AIRQ_ISC;
 	rc = register_adapter_interrupt(&info->airq);
@@ -275,7 +282,7 @@ static unsigned long get_airq_indicator(struct virtqueue *vqs[], int nvqs,
 
 	for (i = 0; i < MAX_AIRQ_AREAS && !indicator_addr; i++) {
 		if (!airq_areas[i])
-			airq_areas[i] = new_airq_info();
+			airq_areas[i] = new_airq_info(i);
 		info = airq_areas[i];
 		if (!info)
 			return 0;
@@ -360,7 +367,7 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
 		if (!thinint_area)
 			return;
 		thinint_area->summary_indicator =
-			(unsigned long) &airq_info->summary_indicator;
+			(unsigned long) get_summary_indicator(airq_info);
 		thinint_area->isc = VIRTIO_AIRQ_ISC;
 		ccw->cmd_code = CCW_CMD_SET_IND_ADAPTER;
 		ccw->count = sizeof(*thinint_area);
@@ -625,7 +632,7 @@ static int virtio_ccw_register_adapter_ind(struct virtio_ccw_device *vcdev,
 	}
 	info = vcdev->airq_info;
 	thinint_area->summary_indicator =
-		(unsigned long) &info->summary_indicator;
+		(unsigned long) get_summary_indicator(info);
 	thinint_area->isc = VIRTIO_AIRQ_ISC;
 	ccw->cmd_code = CCW_CMD_SET_IND_ADAPTER;
 	ccw->flags = CCW_FLAG_SLI;
@@ -1501,6 +1508,7 @@ static int __init virtio_ccw_init(void)
 {
 	/* parse no_auto string before we do anything further */
 	no_auto_parse();
+	summary_indicators = cio_dma_zalloc(MAX_AIRQ_AREAS);
 	return ccw_driver_register(&virtio_ccw_driver);
 }
 device_initcall(virtio_ccw_init);
-- 
2.13.4



* Re: [PATCH v2 2/8] s390/cio: introduce DMA pools to cio
  2019-05-23 16:22 ` [PATCH v2 2/8] s390/cio: introduce DMA pools to cio Michael Mueller
@ 2019-05-25  9:22   ` Sebastian Ott
  2019-05-27 11:26     ` Michael Mueller
  2019-05-27  6:57   ` Cornelia Huck
  1 sibling, 1 reply; 36+ messages in thread
From: Sebastian Ott @ 2019-05-25  9:22 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel


On Thu, 23 May 2019, Michael Mueller wrote:
> +static void __init cio_dma_pool_init(void)
> +{
> +	/* No need to free up the resources: compiled in */
> +	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);

This can return NULL.

> +/**
> + * Allocate dma memory from the css global pool. Intended for memory not
> + * specific to any single device within the css. The allocated memory
> + * is not guaranteed to be 31-bit addressable.
> + *
> + * Caution: Not suitable for early stuff like console.
> + *
> + */

drivers/s390/cio/css.c:1121: warning: Function parameter or member 'size' not described in 'cio_dma_zalloc'

Reviewed-by: Sebastian Ott <sebott@linux.ibm.com>



* Re: [PATCH v2 3/8] s390/cio: add basic protected virtualization support
  2019-05-23 16:22 ` [PATCH v2 3/8] s390/cio: add basic protected virtualization support Michael Mueller
@ 2019-05-25  9:44   ` Sebastian Ott
  2019-05-27 15:01     ` Michael Mueller
  2019-05-27 10:38   ` Cornelia Huck
  1 sibling, 1 reply; 36+ messages in thread
From: Sebastian Ott @ 2019-05-25  9:44 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel


On Thu, 23 May 2019, Michael Mueller wrote:
>  static struct ccw_device * io_subchannel_allocate_dev(struct subchannel *sch)
>  {
>  	struct ccw_device *cdev;
> +	struct gen_pool *dma_pool;
>  
>  	cdev  = kzalloc(sizeof(*cdev), GFP_KERNEL);
> -	if (cdev) {
> -		cdev->private = kzalloc(sizeof(struct ccw_device_private),
> -					GFP_KERNEL | GFP_DMA);
> -		if (cdev->private)
> -			return cdev;
> -	}
> +	if (!cdev)
> +		goto err_cdev;
> +	cdev->private = kzalloc(sizeof(struct ccw_device_private),
> +				GFP_KERNEL | GFP_DMA);
> +	if (!cdev->private)
> +		goto err_priv;
> +	cdev->dev.coherent_dma_mask = sch->dev.coherent_dma_mask;
> +	cdev->dev.dma_mask = &cdev->dev.coherent_dma_mask;
> +	dma_pool = cio_gp_dma_create(&cdev->dev, 1);

This can return NULL. gen_pool_alloc will panic in this case.
[...]

> +err_dma_area:
> +		kfree(io_priv);

Indentation.

> +err_priv:
> +	put_device(&sch->dev);
> +	return ERR_PTR(-ENOMEM);
>  }
[...]
>  void ccw_device_update_sense_data(struct ccw_device *cdev)
>  {
>  	memset(&cdev->id, 0, sizeof(cdev->id));
> -	cdev->id.cu_type   = cdev->private->senseid.cu_type;
> -	cdev->id.cu_model  = cdev->private->senseid.cu_model;
> -	cdev->id.dev_type  = cdev->private->senseid.dev_type;
> -	cdev->id.dev_model = cdev->private->senseid.dev_model;
> +	cdev->id.cu_type   =
> +		cdev->private->dma_area->senseid.cu_type;
> +	cdev->id.cu_model  =
> +		cdev->private->dma_area->senseid.cu_model;
> +	cdev->id.dev_type  =
> +		cdev->private->dma_area->senseid.dev_type;
> +	cdev->id.dev_model =
> +		cdev->private->dma_area->senseid.dev_model;

These fit into one line.

> +/**
> + * Allocate zeroed dma coherent 31 bit addressable memory using
> + * the subchannels dma pool. Maximal size of allocation supported
> + * is PAGE_SIZE.
> + */
drivers/s390/cio/device_ops.c:708: warning: Function parameter or member 'cdev' not described in 'ccw_device_dma_zalloc'
drivers/s390/cio/device_ops.c:708: warning: Function parameter or member 'size' not described in 'ccw_device_dma_zalloc'


Reviewed-by: Sebastian Ott <sebott@linux.ibm.com>



* Re: [PATCH v2 4/8] s390/airq: use DMA memory for adapter interrupts
  2019-05-23 16:22 ` [PATCH v2 4/8] s390/airq: use DMA memory for adapter interrupts Michael Mueller
@ 2019-05-25  9:51   ` Sebastian Ott
  2019-05-27 10:53   ` Cornelia Huck
  1 sibling, 0 replies; 36+ messages in thread
From: Sebastian Ott @ 2019-05-25  9:51 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel


On Thu, 23 May 2019, Michael Mueller wrote:
> From: Halil Pasic <pasic@linux.ibm.com>
> 
> Protected virtualization guests have to use shared pages for airq
> notifier bit vectors, because hypervisor needs to write these bits.
> 
> Let us make sure we allocate DMA memory for the notifier bit vectors by
> replacing the kmem_cache with a dma_cache and kalloc() with
> cio_dma_zalloc().
> 
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>

Reviewed-by: Sebastian Ott <sebott@linux.ibm.com>



* Re: [PATCH v2 2/8] s390/cio: introduce DMA pools to cio
  2019-05-23 16:22 ` [PATCH v2 2/8] s390/cio: introduce DMA pools to cio Michael Mueller
  2019-05-25  9:22   ` Sebastian Ott
@ 2019-05-27  6:57   ` Cornelia Huck
  2019-05-27 11:47     ` Halil Pasic
  2019-05-27 12:00     ` Michael Mueller
  1 sibling, 2 replies; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27  6:57 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel

On Thu, 23 May 2019 18:22:03 +0200
Michael Mueller <mimu@linux.ibm.com> wrote:

> From: Halil Pasic <pasic@linux.ibm.com>
> 
> To support protected virtualization cio will need to make sure the
> memory used for communication with the hypervisor is DMA memory.
> 
> Let us introduce one global cio, and some tools for pools seated

"one global pool for cio"?

> at individual devices.
> 
> Our DMA pools are implemented as a gen_pool backed with DMA pages. The
> idea is to avoid each allocation effectively wasting a page, as we
> typically allocate much less than PAGE_SIZE.
> 
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> ---
>  arch/s390/Kconfig           |   1 +
>  arch/s390/include/asm/cio.h |  11 +++++
>  drivers/s390/cio/css.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 122 insertions(+)
> 

(...)

> @@ -1018,6 +1024,109 @@ static struct notifier_block css_power_notifier = {
>  	.notifier_call = css_power_event,
>  };
>  
> +#define POOL_INIT_PAGES 1
> +static struct gen_pool *cio_dma_pool;
> +/* Currently cio supports only a single css */

This comment looks misplaced.

> +#define  CIO_DMA_GFP (GFP_KERNEL | __GFP_ZERO)
> +
> +
> +struct device *cio_get_dma_css_dev(void)
> +{
> +	return &channel_subsystems[0]->device;
> +}
> +
> +struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
> +{
> +	struct gen_pool *gp_dma;
> +	void *cpu_addr;
> +	dma_addr_t dma_addr;
> +	int i;
> +
> +	gp_dma = gen_pool_create(3, -1);
> +	if (!gp_dma)
> +		return NULL;
> +	for (i = 0; i < nr_pages; ++i) {
> +		cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr,
> +					      CIO_DMA_GFP);
> +		if (!cpu_addr)
> +			return gp_dma;

So, you may return here with no memory added to the pool at all (or
less than requested), but for the caller that is indistinguishable from
an allocation that went all right. May that be a problem?

> +		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
> +				  dma_addr, PAGE_SIZE, -1);
> +	}
> +	return gp_dma;
> +}
> +

(...)

> +static void __init cio_dma_pool_init(void)
> +{
> +	/* No need to free up the resources: compiled in */
> +	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);

Does it make sense to continue if you did not get a pool here? I don't
think that should happen unless things were really bad already?

> +}
> +
> +void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
> +			size_t size)
> +{
> +	dma_addr_t dma_addr;
> +	unsigned long addr;
> +	size_t chunk_size;
> +
> +	addr = gen_pool_alloc(gp_dma, size);
> +	while (!addr) {
> +		chunk_size = round_up(size, PAGE_SIZE);
> +		addr = (unsigned long) dma_alloc_coherent(dma_dev,
> +					 chunk_size, &dma_addr, CIO_DMA_GFP);
> +		if (!addr)
> +			return NULL;
> +		gen_pool_add_virt(gp_dma, addr, dma_addr, chunk_size, -1);
> +		addr = gen_pool_alloc(gp_dma, size);
> +	}
> +	return (void *) addr;
> +}
> +
> +void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size)
> +{
> +	if (!cpu_addr)
> +		return;
> +	memset(cpu_addr, 0, size);
> +	gen_pool_free(gp_dma, (unsigned long) cpu_addr, size);
> +}
> +
> +/**
> + * Allocate dma memory from the css global pool. Intended for memory not
> + * specific to any single device within the css. The allocated memory
> + * is not guaranteed to be 31-bit addressable.
> + *
> + * Caution: Not suitable for early stuff like console.
> + *
> + */
> +void *cio_dma_zalloc(size_t size)
> +{
> +	return cio_gp_dma_zalloc(cio_dma_pool, cio_get_dma_css_dev(), size);

Ok, that looks like the failure I mentioned above should be
accommodated by the code. Still, I think it's a bit odd.

> +}


* Re: [PATCH v2 3/8] s390/cio: add basic protected virtualization support
  2019-05-23 16:22 ` [PATCH v2 3/8] s390/cio: add basic protected virtualization support Michael Mueller
  2019-05-25  9:44   ` Sebastian Ott
@ 2019-05-27 10:38   ` Cornelia Huck
  2019-05-27 12:15     ` Michael Mueller
  2019-05-27 12:30     ` Halil Pasic
  1 sibling, 2 replies; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27 10:38 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel

On Thu, 23 May 2019 18:22:04 +0200
Michael Mueller <mimu@linux.ibm.com> wrote:

> From: Halil Pasic <pasic@linux.ibm.com>
> 
> As virtio-ccw devices are channel devices, we need to use the dma area
> for any communication with the hypervisor.
> 
> It handles neither QDIO in the common code, nor any device type specific
> stuff (like channel programs constructed by the DASD driver).
> 
> An interesting side effect is that virtio structures are now going to
> get allocated in 31 bit addressable storage.
> 
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>

[Side note: you really should add your s-o-b if you send someone else's
patches... if Halil ends up committing them, it's fine, though.]

> ---
>  arch/s390/include/asm/ccwdev.h   |  4 +++
>  drivers/s390/cio/ccwreq.c        |  9 +++---
>  drivers/s390/cio/device.c        | 64 +++++++++++++++++++++++++++++++++-------
>  drivers/s390/cio/device_fsm.c    | 53 ++++++++++++++++++++-------------
>  drivers/s390/cio/device_id.c     | 20 +++++++------
>  drivers/s390/cio/device_ops.c    | 21 +++++++++++--
>  drivers/s390/cio/device_pgid.c   | 22 +++++++-------
>  drivers/s390/cio/device_status.c | 24 +++++++--------
>  drivers/s390/cio/io_sch.h        | 20 +++++++++----
>  drivers/s390/virtio/virtio_ccw.c | 10 -------
>  10 files changed, 164 insertions(+), 83 deletions(-)
> 

(...)

> @@ -1593,20 +1622,31 @@ struct ccw_device * __init ccw_device_create_console(struct ccw_driver *drv)
>  		return ERR_CAST(sch);
>  
>  	io_priv = kzalloc(sizeof(*io_priv), GFP_KERNEL | GFP_DMA);
> -	if (!io_priv) {
> -		put_device(&sch->dev);
> -		return ERR_PTR(-ENOMEM);
> -	}
> +	if (!io_priv)
> +		goto err_priv;
> +	io_priv->dma_area = dma_alloc_coherent(&sch->dev,
> +				sizeof(*io_priv->dma_area),
> +				&io_priv->dma_area_dma, GFP_KERNEL);

Even though we'll only end up here for 3215 or 3270 consoles, this sent
me looking.

This code is invoked via console_init(). A few lines down in
start_kernel(), we have

        /*                                                                       
         * This needs to be called before any devices perform DMA                
         * operations that might use the SWIOTLB bounce buffers. It will         
         * mark the bounce buffers as decrypted so that their usage will         
         * not cause "plain-text" data to be decrypted when accessed.            
         */
        mem_encrypt_init();

So, I'm wondering if creating the console device interacts in any way
with the memory encryption interface?

[Does basic recognition work if you start a protected virt guest with a
3270 console? I realize that the console is unlikely to work, but that
should at least exercise this code path.]

> +	if (!io_priv->dma_area)
> +		goto err_dma_area;
>  	set_io_private(sch, io_priv);
>  	cdev = io_subchannel_create_ccwdev(sch);
>  	if (IS_ERR(cdev)) {
>  		put_device(&sch->dev);
> +		dma_free_coherent(&sch->dev, sizeof(*io_priv->dma_area),
> +				  io_priv->dma_area, io_priv->dma_area_dma);
>  		kfree(io_priv);
>  		return cdev;
>  	}
>  	cdev->drv = drv;
>  	ccw_device_set_int_class(cdev);
>  	return cdev;
> +
> +err_dma_area:
> +		kfree(io_priv);
> +err_priv:
> +	put_device(&sch->dev);
> +	return ERR_PTR(-ENOMEM);
>  }
>  
>  void __init ccw_device_destroy_console(struct ccw_device *cdev)


* Re: [PATCH v2 4/8] s390/airq: use DMA memory for adapter interrupts
  2019-05-23 16:22 ` [PATCH v2 4/8] s390/airq: use DMA memory for adapter interrupts Michael Mueller
  2019-05-25  9:51   ` Sebastian Ott
@ 2019-05-27 10:53   ` Cornelia Huck
  1 sibling, 0 replies; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27 10:53 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel

On Thu, 23 May 2019 18:22:05 +0200
Michael Mueller <mimu@linux.ibm.com> wrote:

> From: Halil Pasic <pasic@linux.ibm.com>
> 
> Protected virtualization guests have to use shared pages for airq
> notifier bit vectors, because hypervisor needs to write these bits.
> 
> Let us make sure we allocate DMA memory for the notifier bit vectors by
> replacing the kmem_cache with a dma_cache and kalloc() with
> cio_dma_zalloc().
> 
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> ---
>  arch/s390/include/asm/airq.h |  2 ++
>  drivers/s390/cio/airq.c      | 32 ++++++++++++++++++++------------
>  drivers/s390/cio/cio.h       |  2 ++
>  drivers/s390/cio/css.c       |  1 +
>  4 files changed, 25 insertions(+), 12 deletions(-)

(...)

> diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
> index 789f6ecdbbcc..f09521771a32 100644
> --- a/drivers/s390/cio/css.c
> +++ b/drivers/s390/cio/css.c
> @@ -1173,6 +1173,7 @@ static int __init css_bus_init(void)
>  		goto out_unregister;
>  	}
>  	cio_dma_pool_init();
> +	airq_init();
>  	css_init_done = 1;
>  
>  	/* Enable default isc for I/O subchannels. */

Not directly related to this patch (I don't really have any comment
here), but I started looking at the code again and now I'm wondering
about chsc. I think it came up before, but I can't remember if chsc
needed special treatment... Anyway, css_bus_init() uses some chscs
early (before cio_dma_pool_init), so we could not use the pools there,
even if we wanted to. Do chsc commands either work, or else fail
benignly on a protected virt guest?

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 5/8] virtio/s390: use cacheline aligned airq bit vectors
  2019-05-23 16:22 ` [PATCH v2 5/8] virtio/s390: use cacheline aligned airq bit vectors Michael Mueller
@ 2019-05-27 10:55   ` Cornelia Huck
  2019-05-27 12:03     ` Halil Pasic
  0 siblings, 1 reply; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27 10:55 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel

On Thu, 23 May 2019 18:22:06 +0200
Michael Mueller <mimu@linux.ibm.com> wrote:

> From: Halil Pasic <pasic@linux.ibm.com>
> 
> The flag AIRQ_IV_CACHELINE was recently added to airq_iv_create(). Let
> us use it! We actually wanted the vector to span a cacheline all along.
> 
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> ---
>  drivers/s390/virtio/virtio_ccw.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
> index f995798bb025..1da7430f94c8 100644
> --- a/drivers/s390/virtio/virtio_ccw.c
> +++ b/drivers/s390/virtio/virtio_ccw.c
> @@ -216,7 +216,8 @@ static struct airq_info *new_airq_info(void)
>  	if (!info)
>  		return NULL;
>  	rwlock_init(&info->lock);
> -	info->aiv = airq_iv_create(VIRTIO_IV_BITS, AIRQ_IV_ALLOC | AIRQ_IV_PTR);
> +	info->aiv = airq_iv_create(VIRTIO_IV_BITS, AIRQ_IV_ALLOC | AIRQ_IV_PTR
> +				   | AIRQ_IV_CACHELINE);
>  	if (!info->aiv) {
>  		kfree(info);
>  		return NULL;

This patch looks to be independent of the previous patches?

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 6/8] virtio/s390: add indirection to indicators access
  2019-05-23 16:22 ` [PATCH v2 6/8] virtio/s390: add indirection to indicators access Michael Mueller
@ 2019-05-27 11:00   ` Cornelia Huck
  2019-05-27 11:57     ` Halil Pasic
  0 siblings, 1 reply; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27 11:00 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel

On Thu, 23 May 2019 18:22:07 +0200
Michael Mueller <mimu@linux.ibm.com> wrote:

> From: Halil Pasic <pasic@linux.ibm.com>
> 
> This will come in handy soon when we pull out the indicators from
> virtio_ccw_device to a memory area that is shared with the hypervisor
> (in particular for protected virtualization guests).
> 
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
> ---
>  drivers/s390/virtio/virtio_ccw.c | 40 +++++++++++++++++++++++++---------------
>  1 file changed, 25 insertions(+), 15 deletions(-)
> 

> @@ -338,17 +348,17 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
>  		ccw->cda = (__u32)(unsigned long) thinint_area;
>  	} else {
>  		/* payload is the address of the indicators */
> -		indicatorp = kmalloc(sizeof(&vcdev->indicators),
> +		indicatorp = kmalloc(sizeof(indicators(vcdev)),
>  				     GFP_DMA | GFP_KERNEL);
>  		if (!indicatorp)
>  			return;
>  		*indicatorp = 0;
>  		ccw->cmd_code = CCW_CMD_SET_IND;
> -		ccw->count = sizeof(&vcdev->indicators);
> +		ccw->count = sizeof(indicators(vcdev));
>  		ccw->cda = (__u32)(unsigned long) indicatorp;
>  	}
>  	/* Deregister indicators from host. */
> -	vcdev->indicators = 0;
> +	*indicators(vcdev) = 0;

I'm not too hot about this notation, but it's not wrong and a minor
thing :)

>  	ccw->flags = 0;
>  	ret = ccw_io_helper(vcdev, ccw,
>  			    vcdev->is_thinint ?

Patch looks reasonable and not dependent on the other patches here.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 2/8] s390/cio: introduce DMA pools to cio
  2019-05-25  9:22   ` Sebastian Ott
@ 2019-05-27 11:26     ` Michael Mueller
  0 siblings, 0 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-27 11:26 UTC (permalink / raw)
  To: Sebastian Ott
  Cc: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel



On 25.05.19 11:22, Sebastian Ott wrote:
> 
> On Thu, 23 May 2019, Michael Mueller wrote:
>> +static void __init cio_dma_pool_init(void)
>> +{
>> +	/* No need to free up the resources: compiled in */
>> +	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);
> 
> This can return NULL.

css_bus_init() will fail with -ENOMEM in v3

> 
>> +/**
>> + * Allocate dma memory from the css global pool. Intended for memory not
>> + * specific to any single device within the css. The allocated memory
>> + * is not guaranteed to be 31-bit addressable.
>> + *
>> + * Caution: Not suitable for early stuff like console.
>> + *
>> + */
> 
> drivers/s390/cio/css.c:1121: warning: Function parameter or member 'size' not described in 'cio_dma_zalloc'

will complete param description in v3

> 
> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com>

Thanks!

> 

Michael


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 2/8] s390/cio: introduce DMA pools to cio
  2019-05-27  6:57   ` Cornelia Huck
@ 2019-05-27 11:47     ` Halil Pasic
  2019-05-27 12:06       ` Cornelia Huck
  2019-05-27 12:00     ` Michael Mueller
  1 sibling, 1 reply; 36+ messages in thread
From: Halil Pasic @ 2019-05-27 11:47 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Michael Mueller, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Mon, 27 May 2019 08:57:18 +0200
Cornelia Huck <cohuck@redhat.com> wrote:

> On Thu, 23 May 2019 18:22:03 +0200
> Michael Mueller <mimu@linux.ibm.com> wrote:
> 
> > From: Halil Pasic <pasic@linux.ibm.com>
> > 
> > To support protected virtualization cio will need to make sure the
> > memory used for communication with the hypervisor is DMA memory.
> > 
> > Let us introduce one global cio, and some tools for pools seated
> 
> "one global pool for cio"?
> 

Nod.

> > at individual devices.
> > 
> > Our DMA pools are implemented as a gen_pool backed with DMA pages. The
> > idea is to avoid each allocation effectively wasting a page, as we
> > typically allocate much less than PAGE_SIZE.
> > 
> > Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> > ---
> >  arch/s390/Kconfig           |   1 +
> >  arch/s390/include/asm/cio.h |  11 +++++
> >  drivers/s390/cio/css.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 122 insertions(+)
> > 
> 
> (...)
> 
> > @@ -1018,6 +1024,109 @@ static struct notifier_block css_power_notifier = {
> >  	.notifier_call = css_power_event,
> >  };
> >  
> > +#define POOL_INIT_PAGES 1
> > +static struct gen_pool *cio_dma_pool;
> > +/* Currently cio supports only a single css */
> 
> This comment looks misplaced.

Right! Move to ...

> 
> > +#define  CIO_DMA_GFP (GFP_KERNEL | __GFP_ZERO)
> > +
> > +

... here?

> > +struct device *cio_get_dma_css_dev(void)
> > +{
> > +	return &channel_subsystems[0]->device;
> > +}
> > +
> > +struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
> > +{
> > +	struct gen_pool *gp_dma;
> > +	void *cpu_addr;
> > +	dma_addr_t dma_addr;
> > +	int i;
> > +
> > +	gp_dma = gen_pool_create(3, -1);
> > +	if (!gp_dma)
> > +		return NULL;
> > +	for (i = 0; i < nr_pages; ++i) {
> > +		cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr,
> > +					      CIO_DMA_GFP);
> > +		if (!cpu_addr)
> > +			return gp_dma;
> 
> So, you may return here with no memory added to the pool at all (or
> less than requested), but for the caller that is indistinguishable from
> an allocation that went all right. May that be a problem?
> 

I do not think it can cause a problem: cio_gp_dma_zalloc() is going to
try to allocate the memory required and put it in the pool. If that
fails as well, we return a NULL pointer like kmalloc(). So I think we
are clean.
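To illustrate the point, here is a minimal user-space sketch of that grow-on-demand pattern (toy_pool and toy_zalloc are made-up names; plain malloc() stands in for dma_alloc_coherent(), and this single-chunk toy abandons the old chunk on growth, which the real gen_pool of course does not):

```c
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for the cio gen_pool: one chunk, bump allocation. */
struct toy_pool {
	char *chunk;
	size_t used, cap;
};

/* Mirrors the cio_gp_dma_zalloc() idea: try the pool first, and only
 * on failure grab a fresh page-sized chunk and retry.  A pool that
 * started out empty (pre-allocation failed) still serves allocations
 * later; NULL comes back only if the backing allocation fails too. */
static void *toy_zalloc(struct toy_pool *p, size_t size)
{
	if (p->used + size > p->cap) {	/* pool exhausted (or empty) */
		size_t chunk_size = (size + 4095) & ~4095UL; /* round_up */
		char *mem = malloc(chunk_size);

		if (!mem)
			return NULL;	/* both paths failed */
		p->chunk = mem;
		p->used = 0;
		p->cap = chunk_size;
	}
	void *addr = p->chunk + p->used;

	p->used += size;
	memset(addr, 0, size);		/* zalloc semantics */
	return addr;
}
```

An empty `struct toy_pool p = {0};` hands out zeroed 64-byte chunks just fine, which is the reason a partially filled pool is indistinguishable from a full one here.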

> > +		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
> > +				  dma_addr, PAGE_SIZE, -1);
> > +	}
> > +	return gp_dma;
> > +}
> > +
> 
> (...)
> 
> > +static void __init cio_dma_pool_init(void)
> > +{
> > +	/* No need to free up the resources: compiled in */
> > +	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);
> 
> Does it make sense to continue if you did not get a pool here? I don't
> think that should happen unless things were really bad already?
> 

I agree, this should not fail under any sane circumstances. I don't
think it makes sense to continue. Shall we simply call panic()?

> > +}
> > +
> > +void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
> > +			size_t size)
> > +{
> > +	dma_addr_t dma_addr;
> > +	unsigned long addr;
> > +	size_t chunk_size;
> > +
> > +	addr = gen_pool_alloc(gp_dma, size);
> > +	while (!addr) {
> > +		chunk_size = round_up(size, PAGE_SIZE);
> > +		addr = (unsigned long) dma_alloc_coherent(dma_dev,
> > +					 chunk_size, &dma_addr, CIO_DMA_GFP);
> > +		if (!addr)
> > +			return NULL;
> > +		gen_pool_add_virt(gp_dma, addr, dma_addr, chunk_size, -1);
> > +		addr = gen_pool_alloc(gp_dma, size);
> > +	}
> > +	return (void *) addr;
> > +}
> > +
> > +void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size)
> > +{
> > +	if (!cpu_addr)
> > +		return;
> > +	memset(cpu_addr, 0, size);
> > +	gen_pool_free(gp_dma, (unsigned long) cpu_addr, size);
> > +}
> > +
> > +/**
> > + * Allocate dma memory from the css global pool. Intended for memory not
> > + * specific to any single device within the css. The allocated memory
> > + * is not guaranteed to be 31-bit addressable.
> > + *
> > + * Caution: Not suitable for early stuff like console.
> > + *
> > + */
> > +void *cio_dma_zalloc(size_t size)
> > +{
> > +	return cio_gp_dma_zalloc(cio_dma_pool, cio_get_dma_css_dev(), size);
> 
> Ok, that looks like the failure I mentioned above should be
> accommodated by the code. Still, I think it's a bit odd.
> 

I think the behavior is reasonable: if client code wants to pre-allocate n
page-sized chunks, we pre-allocate as many as we can. If we can't
pre-allocate all n, it ain't necessarily bad. There is no guarantee we
will later hit a wall in a non-recoverable fashion.

But if you insist, I can either get rid of the pre-allocation, or make
create fail and roll back if the pre-allocation fails.
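For the sake of the discussion, the best-effort pre-allocation can be sketched in user space like this (toy_gp, toy_pool_create() and the fail_after knob are invented for illustration; fail_after plays the role of dma_alloc_coherent() starting to fail part-way through the loop):

```c
#include <stdlib.h>

/* Toy model of the best-effort loop in cio_gp_dma_create(): try to
 * add nr_pages chunks, but hand back whatever we managed to add if
 * an allocation fails part-way.  The caller gets a usable (possibly
 * smaller, possibly even empty) pool either way. */
struct toy_gp {
	int pages;
};

static struct toy_gp *toy_pool_create(int nr_pages, int fail_after)
{
	struct toy_gp *p = calloc(1, sizeof(*p));

	if (!p)
		return NULL;
	for (int i = 0; i < nr_pages; i++) {
		if (i >= fail_after)	/* simulated allocation failure */
			return p;	/* partial pool, not an error */
		p->pages++;		/* stands in for gen_pool_add_virt() */
	}
	return p;
}
```

Note that the only way the caller could distinguish the partial case would be an extra out-parameter or a rollback-and-fail policy, which is exactly the trade-off under debate.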

Thanks for having a look!

Regards,
Halil

> > +}
> 


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 7/8] virtio/s390: use DMA memory for ccw I/O and classic notifiers
  2019-05-23 16:22 ` [PATCH v2 7/8] virtio/s390: use DMA memory for ccw I/O and classic notifiers Michael Mueller
@ 2019-05-27 11:49   ` Cornelia Huck
  0 siblings, 0 replies; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27 11:49 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel

On Thu, 23 May 2019 18:22:08 +0200
Michael Mueller <mimu@linux.ibm.com> wrote:

> From: Halil Pasic <pasic@linux.ibm.com>
> 
> Before virtio-ccw could get away with not using DMA API for the pieces of
> memory it does ccw I/O with. With protected virtualization this has to
> change, since the hypervisor needs to read and sometimes also write these
> pieces of memory.
> 
> The hypervisor is supposed to poke the classic notifiers, if these are
> used, out of band with regards to ccw I/O. So these need to be allocated
> as DMA memory (which is shared memory for protected virtualization
> guests).
> 
> Let us factor out everything from struct virtio_ccw_device that needs to
> be DMA memory in a satellite that is allocated as such.
> 
> Note: The control blocks of I/O instructions do not need to be shared.
> These are marshalled by the ultravisor.
> 
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
> ---
>  drivers/s390/virtio/virtio_ccw.c | 177 +++++++++++++++++++++------------------
>  1 file changed, 96 insertions(+), 81 deletions(-)

(...)

> @@ -176,6 +180,22 @@ static struct virtio_ccw_device *to_vc_device(struct virtio_device *vdev)
>  	return container_of(vdev, struct virtio_ccw_device, vdev);
>  }
>  
> +static inline void *__vc_dma_alloc(struct virtio_device *vdev, size_t size)
> +{
> +	return ccw_device_dma_zalloc(to_vc_device(vdev)->cdev, size);
> +}
> +
> +static inline void __vc_dma_free(struct virtio_device *vdev, size_t size,
> +				 void *cpu_addr)
> +{
> +	return ccw_device_dma_free(to_vc_device(vdev)->cdev, cpu_addr, size);
> +}
> +
> +#define vc_dma_alloc_struct(vdev, ptr) \
> +	({ptr = __vc_dma_alloc(vdev, sizeof(*(ptr))); })
> +#define vc_dma_free_struct(vdev, ptr) \
> +	__vc_dma_free(vdev, sizeof(*(ptr)), (ptr))

I really don't like these #defines.

> +
>  static void drop_airq_indicator(struct virtqueue *vq, struct airq_info *info)
>  {
>  	unsigned long i, flags;
> @@ -336,8 +356,7 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
>  	struct airq_info *airq_info = vcdev->airq_info;
>  
>  	if (vcdev->is_thinint) {
> -		thinint_area = kzalloc(sizeof(*thinint_area),
> -				       GFP_DMA | GFP_KERNEL);
> +		vc_dma_alloc_struct(&vcdev->vdev, thinint_area);

Any reason why this takes a detour via the virtio device? The ccw
device is already referenced in vcdev, isn't it?

thinint_area = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*thinint_area));

looks much more obvious to me.


>  		if (!thinint_area)
>  			return;
>  		thinint_area->summary_indicator =

(...)

I did not spot anything obviously broken in the patch.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 6/8] virtio/s390: add indirection to indicators access
  2019-05-27 11:00   ` Cornelia Huck
@ 2019-05-27 11:57     ` Halil Pasic
  2019-05-27 12:10       ` Cornelia Huck
  0 siblings, 1 reply; 36+ messages in thread
From: Halil Pasic @ 2019-05-27 11:57 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Michael Mueller, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Mon, 27 May 2019 13:00:28 +0200
Cornelia Huck <cohuck@redhat.com> wrote:

> On Thu, 23 May 2019 18:22:07 +0200
> Michael Mueller <mimu@linux.ibm.com> wrote:
> 
> > From: Halil Pasic <pasic@linux.ibm.com>
> > 
> > This will come in handy soon when we pull out the indicators from
> > virtio_ccw_device to a memory area that is shared with the hypervisor
> > (in particular for protected virtualization guests).
> > 
> > Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> > Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
> > ---
> >  drivers/s390/virtio/virtio_ccw.c | 40 +++++++++++++++++++++++++---------------
> >  1 file changed, 25 insertions(+), 15 deletions(-)
> > 
> 
> > @@ -338,17 +348,17 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
> >  		ccw->cda = (__u32)(unsigned long) thinint_area;
> >  	} else {
> >  		/* payload is the address of the indicators */
> > -		indicatorp = kmalloc(sizeof(&vcdev->indicators),
> > +		indicatorp = kmalloc(sizeof(indicators(vcdev)),
> >  				     GFP_DMA | GFP_KERNEL);
> >  		if (!indicatorp)
> >  			return;
> >  		*indicatorp = 0;
> >  		ccw->cmd_code = CCW_CMD_SET_IND;
> > -		ccw->count = sizeof(&vcdev->indicators);
> > +		ccw->count = sizeof(indicators(vcdev));
> >  		ccw->cda = (__u32)(unsigned long) indicatorp;
> >  	}
> >  	/* Deregister indicators from host. */
> > -	vcdev->indicators = 0;
> > +	*indicators(vcdev) = 0;
> 
> I'm not too hot about this notation, but it's not wrong and a minor
> thing :)

I don't have any better ideas :/

> 
> >  	ccw->flags = 0;
> >  	ret = ccw_io_helper(vcdev, ccw,
> >  			    vcdev->is_thinint ?
> 
> Patch looks reasonable and not dependent on the other patches here.
> 

looks reasonable == r-b?

Not dependent in the sense that this patch could be made the first patch
in the series. A subsequent patch depends on it.

Regards,
Halil


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA
  2019-05-23 16:22 ` [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA Michael Mueller
@ 2019-05-27 12:00   ` Cornelia Huck
  2019-05-28 14:33     ` Halil Pasic
  0 siblings, 1 reply; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27 12:00 UTC (permalink / raw)
  To: Michael Mueller
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel

On Thu, 23 May 2019 18:22:09 +0200
Michael Mueller <mimu@linux.ibm.com> wrote:

> From: Halil Pasic <pasic@linux.ibm.com>
> 
> Hypervisor needs to interact with the summary indicators, so these
> need to be DMA memory as well (at least for protected virtualization
> guests).
> 
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> ---
>  drivers/s390/virtio/virtio_ccw.c | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)

(...)

> @@ -1501,6 +1508,7 @@ static int __init virtio_ccw_init(void)
>  {
>  	/* parse no_auto string before we do anything further */
>  	no_auto_parse();
> +	summary_indicators = cio_dma_zalloc(MAX_AIRQ_AREAS);

What happens if this fails?

>  	return ccw_driver_register(&virtio_ccw_driver);
>  }
>  device_initcall(virtio_ccw_init);


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 2/8] s390/cio: introduce DMA pools to cio
  2019-05-27  6:57   ` Cornelia Huck
  2019-05-27 11:47     ` Halil Pasic
@ 2019-05-27 12:00     ` Michael Mueller
  1 sibling, 0 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-27 12:00 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel



On 27.05.19 08:57, Cornelia Huck wrote:
> On Thu, 23 May 2019 18:22:03 +0200
> Michael Mueller <mimu@linux.ibm.com> wrote:
> 
>> From: Halil Pasic <pasic@linux.ibm.com>
>>
>> To support protected virtualization cio will need to make sure the
>> memory used for communication with the hypervisor is DMA memory.
>>
>> Let us introduce one global cio, and some tools for pools seated
> 
> "one global pool for cio"?

changed in v3

> 
>> at individual devices.
>>
>> Our DMA pools are implemented as a gen_pool backed with DMA pages. The
>> idea is to avoid each allocation effectively wasting a page, as we
>> typically allocate much less than PAGE_SIZE.
>>
>> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
>> ---
>>   arch/s390/Kconfig           |   1 +
>>   arch/s390/include/asm/cio.h |  11 +++++
>>   drivers/s390/cio/css.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++
>>   3 files changed, 122 insertions(+)
>>
> 
> (...)
> 
>> @@ -1018,6 +1024,109 @@ static struct notifier_block css_power_notifier = {
>>   	.notifier_call = css_power_event,
>>   };
>>   
>> +#define POOL_INIT_PAGES 1
>> +static struct gen_pool *cio_dma_pool;
>> +/* Currently cio supports only a single css */
> 
> This comment looks misplaced.

gone in v3

> 
>> +#define  CIO_DMA_GFP (GFP_KERNEL | __GFP_ZERO)
>> +
>> +
>> +struct device *cio_get_dma_css_dev(void)
>> +{
>> +	return &channel_subsystems[0]->device;
>> +}
>> +
>> +struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
>> +{
>> +	struct gen_pool *gp_dma;
>> +	void *cpu_addr;
>> +	dma_addr_t dma_addr;
>> +	int i;
>> +
>> +	gp_dma = gen_pool_create(3, -1);
>> +	if (!gp_dma)
>> +		return NULL;
>> +	for (i = 0; i < nr_pages; ++i) {
>> +		cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr,
>> +					      CIO_DMA_GFP);
>> +		if (!cpu_addr)
>> +			return gp_dma;
> 
> So, you may return here with no memory added to the pool at all (or
> less than requested), but for the caller that is indistinguishable from
> an allocation that went all right. May that be a problem?

Halil,

can you please shed some light on the intention of this part of
the code? To me this seems odd as well!
Currently cio_gp_dma_create() might succeed even when
gen_pool_create() succeeds but the initial dma_alloc_coherent() fails.

> 
>> +		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
>> +				  dma_addr, PAGE_SIZE, -1);
>> +	}
>> +	return gp_dma;
>> +}
>> +
> 
> (...)
> 
>> +static void __init cio_dma_pool_init(void)
>> +{
>> +	/* No need to free up the resources: compiled in */
>> +	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);
> 
> Does it make sense to continue if you did not get a pool here? I don't
> think that should happen unless things were really bad already?

cio_gp_dma_create() will be evaluated and css_bus_init() will fail
in v3 in the NULL case.

> 
>> +}
>> +
>> +void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
>> +			size_t size)
>> +{
>> +	dma_addr_t dma_addr;
>> +	unsigned long addr;
>> +	size_t chunk_size;
>> +
>> +	addr = gen_pool_alloc(gp_dma, size);
>> +	while (!addr) {
>> +		chunk_size = round_up(size, PAGE_SIZE);
>> +		addr = (unsigned long) dma_alloc_coherent(dma_dev,
>> +					 chunk_size, &dma_addr, CIO_DMA_GFP);
>> +		if (!addr)
>> +			return NULL;
>> +		gen_pool_add_virt(gp_dma, addr, dma_addr, chunk_size, -1);
>> +		addr = gen_pool_alloc(gp_dma, size);
>> +	}
>> +	return (void *) addr;
>> +}
>> +
>> +void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size)
>> +{
>> +	if (!cpu_addr)
>> +		return;
>> +	memset(cpu_addr, 0, size);
>> +	gen_pool_free(gp_dma, (unsigned long) cpu_addr, size);
>> +}
>> +
>> +/**
>> + * Allocate dma memory from the css global pool. Intended for memory not
>> + * specific to any single device within the css. The allocated memory
>> + * is not guaranteed to be 31-bit addressable.
>> + *
>> + * Caution: Not suitable for early stuff like console.
>> + *
>> + */
>> +void *cio_dma_zalloc(size_t size)
>> +{
>> +	return cio_gp_dma_zalloc(cio_dma_pool, cio_get_dma_css_dev(), size);
> 
> Ok, that looks like the failure I mentioned above should be
> accommodated by the code. Still, I think it's a bit odd.

This code will be reached in v3 only when cio_dma_pool is *not* NULL.

> 
>> +}
> 

Michael


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 5/8] virtio/s390: use cacheline aligned airq bit vectors
  2019-05-27 10:55   ` Cornelia Huck
@ 2019-05-27 12:03     ` Halil Pasic
  0 siblings, 0 replies; 36+ messages in thread
From: Halil Pasic @ 2019-05-27 12:03 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Michael Mueller, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Mon, 27 May 2019 12:55:31 +0200
Cornelia Huck <cohuck@redhat.com> wrote:

> On Thu, 23 May 2019 18:22:06 +0200
> Michael Mueller <mimu@linux.ibm.com> wrote:
> 
> > From: Halil Pasic <pasic@linux.ibm.com>
> > 
> > The flag AIRQ_IV_CACHELINE was recently added to airq_iv_create(). Let
> > us use it! We actually wanted the vector to span a cacheline all along.
> > 
> > Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> > ---
> >  drivers/s390/virtio/virtio_ccw.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
> > index f995798bb025..1da7430f94c8 100644
> > --- a/drivers/s390/virtio/virtio_ccw.c
> > +++ b/drivers/s390/virtio/virtio_ccw.c
> > @@ -216,7 +216,8 @@ static struct airq_info *new_airq_info(void)
> >  	if (!info)
> >  		return NULL;
> >  	rwlock_init(&info->lock);
> > -	info->aiv = airq_iv_create(VIRTIO_IV_BITS, AIRQ_IV_ALLOC | AIRQ_IV_PTR);
> > +	info->aiv = airq_iv_create(VIRTIO_IV_BITS, AIRQ_IV_ALLOC | AIRQ_IV_PTR
> > +				   | AIRQ_IV_CACHELINE);
> >  	if (!info->aiv) {
> >  		kfree(info);
> >  		return NULL;
> 
> This patch looks to be independent of the previous patches?
> 

It ain't clear cut. It could have been part of the series that
introduced AIRQ_IV_CACHELINE. OTOH I'm not sure it buys us anything
without patch 4. (I'm no expert on the slab allocator, so I can't tell
much about the probability of getting cacheline-aligned memory if we ask
kmalloc() for a cacheline-sized chunk of memory.)
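As a user-space analogy of the distinction (CACHELINE and cacheline_aligned() are made up here; 256 bytes is assumed as the s390 cache line size): asking for a cacheline-*sized* chunk is not the same as asking for a cacheline-*aligned* one, which is what AIRQ_IV_CACHELINE requests explicitly:

```c
#include <stdint.h>
#include <stdlib.h>

#define CACHELINE 256	/* assumed s390 cache line size */

/* True if p sits on a cache line boundary. */
static int cacheline_aligned(const void *p)
{
	return ((uintptr_t)p & (CACHELINE - 1)) == 0;
}

/* aligned_alloc() guarantees the boundary; a plain malloc() of
 * CACHELINE bytes only guarantees alignof(max_align_t), so the
 * returned chunk may well straddle two cache lines. */
static void *cacheline_buf(void)
{
	return aligned_alloc(CACHELINE, CACHELINE);
}
```

The kernel-side equivalent of the guarantee is the explicit alignment flag, not the allocation size.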

Regards,
Halil


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 2/8] s390/cio: introduce DMA pools to cio
  2019-05-27 11:47     ` Halil Pasic
@ 2019-05-27 12:06       ` Cornelia Huck
  0 siblings, 0 replies; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27 12:06 UTC (permalink / raw)
  To: Halil Pasic
  Cc: Michael Mueller, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Mon, 27 May 2019 13:47:55 +0200
Halil Pasic <pasic@linux.ibm.com> wrote:

> On Mon, 27 May 2019 08:57:18 +0200
> Cornelia Huck <cohuck@redhat.com> wrote:
> 
> > On Thu, 23 May 2019 18:22:03 +0200
> > Michael Mueller <mimu@linux.ibm.com> wrote:
> >   
> > > From: Halil Pasic <pasic@linux.ibm.com>
> > > 
> > > To support protected virtualization cio will need to make sure the
> > > memory used for communication with the hypervisor is DMA memory.
> > > 
> > > Let us introduce one global cio, and some tools for pools seated  
> > 
> > "one global pool for cio"?
> >   
> 
> Nod.
> 
> > > at individual devices.
> > > 
> > > Our DMA pools are implemented as a gen_pool backed with DMA pages. The
> > > idea is to avoid each allocation effectively wasting a page, as we
> > > typically allocate much less than PAGE_SIZE.
> > > 
> > > Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> > > ---
> > >  arch/s390/Kconfig           |   1 +
> > >  arch/s390/include/asm/cio.h |  11 +++++
> > >  drivers/s390/cio/css.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++
> > >  3 files changed, 122 insertions(+)
> > >   
> > 
> > (...)
> >   
> > > @@ -1018,6 +1024,109 @@ static struct notifier_block css_power_notifier = {
> > >  	.notifier_call = css_power_event,
> > >  };
> > >  
> > > +#define POOL_INIT_PAGES 1
> > > +static struct gen_pool *cio_dma_pool;
> > > +/* Currently cio supports only a single css */  
> > 
> > This comment looks misplaced.  
> 
> Right! Move to ...
> 
> >   
> > > +#define  CIO_DMA_GFP (GFP_KERNEL | __GFP_ZERO)
> > > +
> > > +  
> 
> ... here?

Yes :)

> 
> > > +struct device *cio_get_dma_css_dev(void)
> > > +{
> > > +	return &channel_subsystems[0]->device;
> > > +}
> > > +
> > > +struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
> > > +{
> > > +	struct gen_pool *gp_dma;
> > > +	void *cpu_addr;
> > > +	dma_addr_t dma_addr;
> > > +	int i;
> > > +
> > > +	gp_dma = gen_pool_create(3, -1);
> > > +	if (!gp_dma)
> > > +		return NULL;
> > > +	for (i = 0; i < nr_pages; ++i) {
> > > +		cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr,
> > > +					      CIO_DMA_GFP);
> > > +		if (!cpu_addr)
> > > +			return gp_dma;  
> > 
> > So, you may return here with no memory added to the pool at all (or
> > less than requested), but for the caller that is indistinguishable from
> > an allocation that went all right. May that be a problem?
> >   
> 
> I do not think it can cause a problem: cio_gp_dma_zalloc() is going to
> try to allocate the memory required and put it in the pool. If that
> fails as well, we return a NULL pointer like kmalloc(). So I think we
> are clean.
> 
> > > +		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
> > > +				  dma_addr, PAGE_SIZE, -1);
> > > +	}
> > > +	return gp_dma;
> > > +}
> > > +  
> > 
> > (...)
> >   
> > > +static void __init cio_dma_pool_init(void)
> > > +{
> > > +	/* No need to free up the resources: compiled in */
> > > +	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);  
> > 
> > Does it make sense to continue if you did not get a pool here? I don't
> > think that should happen unless things were really bad already?
> >   
> 
> I agree, this should not fail under any sane circumstances. I don't
> think it makes sense to continue. Shall we simply call panic()?

Can we continue without the common I/O layer? Probably not. It might
really be an 'oh crap, let's panic' situation.

> 
> > > +}
> > > +
> > > +void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
> > > +			size_t size)
> > > +{
> > > +	dma_addr_t dma_addr;
> > > +	unsigned long addr;
> > > +	size_t chunk_size;
> > > +
> > > +	addr = gen_pool_alloc(gp_dma, size);
> > > +	while (!addr) {
> > > +		chunk_size = round_up(size, PAGE_SIZE);
> > > +		addr = (unsigned long) dma_alloc_coherent(dma_dev,
> > > +					 chunk_size, &dma_addr, CIO_DMA_GFP);
> > > +		if (!addr)
> > > +			return NULL;
> > > +		gen_pool_add_virt(gp_dma, addr, dma_addr, chunk_size, -1);
> > > +		addr = gen_pool_alloc(gp_dma, size);
> > > +	}
> > > +	return (void *) addr;
> > > +}
> > > +
> > > +void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size)
> > > +{
> > > +	if (!cpu_addr)
> > > +		return;
> > > +	memset(cpu_addr, 0, size);
> > > +	gen_pool_free(gp_dma, (unsigned long) cpu_addr, size);
> > > +}
> > > +
> > > +/**
> > > + * Allocate dma memory from the css global pool. Intended for memory not
> > > + * specific to any single device within the css. The allocated memory
> > > + * is not guaranteed to be 31-bit addressable.
> > > + *
> > > + * Caution: Not suitable for early stuff like console.
> > > + *
> > > + */
> > > +void *cio_dma_zalloc(size_t size)
> > > +{
> > > +	return cio_gp_dma_zalloc(cio_dma_pool, cio_get_dma_css_dev(), size);  
> > 
> > Ok, that looks like the failure I mentioned above should be
> > accommodated by the code. Still, I think it's a bit odd.
> >   
> 
> I think the behavior is reasonable: if client code wants to pre-allocate n
> page sized chunks we pre-allocate as many as we can. If we can't
> pre-allocate all n, it ain't necessarily bad. There is no guarantee we
> will hit a wall in a non-recoverable fashion.

It's not necessarily broken, but there are two things that feel a bit
weird to me:
- The caller doesn't know if the requested pre-allocation worked or not.
- If we can't get memory in this early init phase, is it likely that we
  can get memory later on?

> 
> But if you insist, I can get rid of the pre-allocation or fail create and
> do a rollback if it fails.
> 
> Thanks for having a look!
> 
> Regards,
> Halil
> 
> > > +}  
> >   
> 


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 6/8] virtio/s390: add indirection to indicators access
  2019-05-27 11:57     ` Halil Pasic
@ 2019-05-27 12:10       ` Cornelia Huck
  2019-05-29 11:05         ` Michael Mueller
  0 siblings, 1 reply; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27 12:10 UTC (permalink / raw)
  To: Halil Pasic
  Cc: Michael Mueller, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Mon, 27 May 2019 13:57:06 +0200
Halil Pasic <pasic@linux.ibm.com> wrote:

> On Mon, 27 May 2019 13:00:28 +0200
> Cornelia Huck <cohuck@redhat.com> wrote:
> 
> > On Thu, 23 May 2019 18:22:07 +0200
> > Michael Mueller <mimu@linux.ibm.com> wrote:
> >   
> > > From: Halil Pasic <pasic@linux.ibm.com>
> > > 
> > > This will come in handy soon when we pull out the indicators from
> > > virtio_ccw_device to a memory area that is shared with the hypervisor
> > > (in particular for protected virtualization guests).
> > > 
> > > Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> > > Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
> > > ---
> > >  drivers/s390/virtio/virtio_ccw.c | 40 +++++++++++++++++++++++++---------------
> > >  1 file changed, 25 insertions(+), 15 deletions(-)
> > >   
> >   
> > > @@ -338,17 +348,17 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
> > >  		ccw->cda = (__u32)(unsigned long) thinint_area;
> > >  	} else {
> > >  		/* payload is the address of the indicators */
> > > -		indicatorp = kmalloc(sizeof(&vcdev->indicators),
> > > +		indicatorp = kmalloc(sizeof(indicators(vcdev)),
> > >  				     GFP_DMA | GFP_KERNEL);
> > >  		if (!indicatorp)
> > >  			return;
> > >  		*indicatorp = 0;
> > >  		ccw->cmd_code = CCW_CMD_SET_IND;
> > > -		ccw->count = sizeof(&vcdev->indicators);
> > > +		ccw->count = sizeof(indicators(vcdev));
> > >  		ccw->cda = (__u32)(unsigned long) indicatorp;
> > >  	}
> > >  	/* Deregister indicators from host. */
> > > -	vcdev->indicators = 0;
> > > +	*indicators(vcdev) = 0;  
> > 
> > I'm not too hot about this notation, but it's not wrong and a minor
> > thing :)  
> 
> I don't have any better ideas :/
> 
> >   
> > >  	ccw->flags = 0;
> > >  	ret = ccw_io_helper(vcdev, ccw,
> > >  			    vcdev->is_thinint ?  
> > 
> > Patch looks reasonable and not dependent on the other patches here.
> >   
> 
> looks reasonable == r-b?
> 
> Not dependent in a sense that this patch could be made a first patch in
> the series. A subsequent patch depends on it.

What is the plan with these patches? I can either pick patch 5+6 and
let them go through the virtio tree, or give my r-b and let them go
through the s390 tree. The former is probably the quicker route, but
the latter has less potential for dependency issues.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 3/8] s390/cio: add basic protected virtualization support
  2019-05-27 10:38   ` Cornelia Huck
@ 2019-05-27 12:15     ` Michael Mueller
  2019-05-27 12:30     ` Halil Pasic
  1 sibling, 0 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-27 12:15 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel



On 27.05.19 12:38, Cornelia Huck wrote:
> On Thu, 23 May 2019 18:22:04 +0200
> Michael Mueller <mimu@linux.ibm.com> wrote:
> 
>> From: Halil Pasic <pasic@linux.ibm.com>
>>
>> As virtio-ccw devices are channel devices, we need to use the dma area
>> for any communication with the hypervisor.
>>
>> It handles neither QDIO in the common code, nor any device type specific
>> stuff (like channel programs constructed by the DASD driver).
>>
>> An interesting side effect is that virtio structures are now going to
>> get allocated in 31 bit addressable storage.
>>
>> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> 
> [Side note: you really should add your s-o-b if you send someone else's
> patches... if Halil ends up committing them, it's fine, though.]

My real problem here is that Halil is writing comments and patches after
I have prepared all my changes. ;) And now this continues...

Michael

> 
>> ---
>>   arch/s390/include/asm/ccwdev.h   |  4 +++
>>   drivers/s390/cio/ccwreq.c        |  9 +++---
>>   drivers/s390/cio/device.c        | 64 +++++++++++++++++++++++++++++++++-------
>>   drivers/s390/cio/device_fsm.c    | 53 ++++++++++++++++++++-------------
>>   drivers/s390/cio/device_id.c     | 20 +++++++------
>>   drivers/s390/cio/device_ops.c    | 21 +++++++++++--
>>   drivers/s390/cio/device_pgid.c   | 22 +++++++-------
>>   drivers/s390/cio/device_status.c | 24 +++++++--------
>>   drivers/s390/cio/io_sch.h        | 20 +++++++++----
>>   drivers/s390/virtio/virtio_ccw.c | 10 -------
>>   10 files changed, 164 insertions(+), 83 deletions(-)
>>
> 
> (...)
> 
>> @@ -1593,20 +1622,31 @@ struct ccw_device * __init ccw_device_create_console(struct ccw_driver *drv)
>>   		return ERR_CAST(sch);
>>   
>>   	io_priv = kzalloc(sizeof(*io_priv), GFP_KERNEL | GFP_DMA);
>> -	if (!io_priv) {
>> -		put_device(&sch->dev);
>> -		return ERR_PTR(-ENOMEM);
>> -	}
>> +	if (!io_priv)
>> +		goto err_priv;
>> +	io_priv->dma_area = dma_alloc_coherent(&sch->dev,
>> +				sizeof(*io_priv->dma_area),
>> +				&io_priv->dma_area_dma, GFP_KERNEL);
> 
> Even though we'll only end up here for 3215 or 3270 consoles, this sent
> me looking.
> 
> This code is invoked via console_init(). A few lines down in
> start_kernel(), we have
> 
>          /*
>           * This needs to be called before any devices perform DMA
>           * operations that might use the SWIOTLB bounce buffers. It will
>           * mark the bounce buffers as decrypted so that their usage will
>           * not cause "plain-text" data to be decrypted when accessed.
>           */
>          mem_encrypt_init();
> 
> So, I'm wondering if creating the console device interacts in any way
> with the memory encryption interface?
> 
> [Does basic recognition work if you start a protected virt guest with a
> 3270 console? I realize that the console is unlikely to work, but that
> should at least exercise this code path.]
> 
>> +	if (!io_priv->dma_area)
>> +		goto err_dma_area;
>>   	set_io_private(sch, io_priv);
>>   	cdev = io_subchannel_create_ccwdev(sch);
>>   	if (IS_ERR(cdev)) {
>>   		put_device(&sch->dev);
>> +		dma_free_coherent(&sch->dev, sizeof(*io_priv->dma_area),
>> +				  io_priv->dma_area, io_priv->dma_area_dma);
>>   		kfree(io_priv);
>>   		return cdev;
>>   	}
>>   	cdev->drv = drv;
>>   	ccw_device_set_int_class(cdev);
>>   	return cdev;
>> +
>> +err_dma_area:
>> +		kfree(io_priv);
>> +err_priv:
>> +	put_device(&sch->dev);
>> +	return ERR_PTR(-ENOMEM);
>>   }
>>   
>>   void __init ccw_device_destroy_console(struct ccw_device *cdev)
> 


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 3/8] s390/cio: add basic protected virtualization support
  2019-05-27 10:38   ` Cornelia Huck
  2019-05-27 12:15     ` Michael Mueller
@ 2019-05-27 12:30     ` Halil Pasic
  2019-05-27 13:31       ` Cornelia Huck
  1 sibling, 1 reply; 36+ messages in thread
From: Halil Pasic @ 2019-05-27 12:30 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Michael Mueller, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Mon, 27 May 2019 12:38:02 +0200
Cornelia Huck <cohuck@redhat.com> wrote:

> On Thu, 23 May 2019 18:22:04 +0200
> Michael Mueller <mimu@linux.ibm.com> wrote:
> 
> > From: Halil Pasic <pasic@linux.ibm.com>
> > 
> > As virtio-ccw devices are channel devices, we need to use the dma area
> > for any communication with the hypervisor.
> > 
> > It handles neither QDIO in the common code, nor any device type specific
> > stuff (like channel programs constructed by the DASD driver).
> > 
> > An interesting side effect is that virtio structures are now going to
> > get allocated in 31 bit addressable storage.
> > 
> > Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> 
> [Side note: you really should add your s-o-b if you send someone else's
> patches... if Halil ends up committing them, it's fine, though.]
> 
> > ---
> >  arch/s390/include/asm/ccwdev.h   |  4 +++
> >  drivers/s390/cio/ccwreq.c        |  9 +++---
> >  drivers/s390/cio/device.c        | 64 +++++++++++++++++++++++++++++++++-------
> >  drivers/s390/cio/device_fsm.c    | 53 ++++++++++++++++++++-------------
> >  drivers/s390/cio/device_id.c     | 20 +++++++------
> >  drivers/s390/cio/device_ops.c    | 21 +++++++++++--
> >  drivers/s390/cio/device_pgid.c   | 22 +++++++-------
> >  drivers/s390/cio/device_status.c | 24 +++++++--------
> >  drivers/s390/cio/io_sch.h        | 20 +++++++++----
> >  drivers/s390/virtio/virtio_ccw.c | 10 -------
> >  10 files changed, 164 insertions(+), 83 deletions(-)
> > 
> 
> (...)
> 
> > @@ -1593,20 +1622,31 @@ struct ccw_device * __init ccw_device_create_console(struct ccw_driver *drv)
> >  		return ERR_CAST(sch);
> >  
> >  	io_priv = kzalloc(sizeof(*io_priv), GFP_KERNEL | GFP_DMA);
> > -	if (!io_priv) {
> > -		put_device(&sch->dev);
> > -		return ERR_PTR(-ENOMEM);
> > -	}
> > +	if (!io_priv)
> > +		goto err_priv;
> > +	io_priv->dma_area = dma_alloc_coherent(&sch->dev,
> > +				sizeof(*io_priv->dma_area),
> > +				&io_priv->dma_area_dma, GFP_KERNEL);
> 
> Even though we'll only end up here for 3215 or 3270 consoles, this sent
> me looking.
> 
> This code is invoked via console_init(). A few lines down in
> start_kernel(), we have
> 
>         /*                                                                       
>          * This needs to be called before any devices perform DMA                
>          * operations that might use the SWIOTLB bounce buffers. It will         
>          * mark the bounce buffers as decrypted so that their usage will         
>          * not cause "plain-text" data to be decrypted when accessed.            
>          */
>         mem_encrypt_init();
> 
> So, I'm wondering if creating the console device interacts in any way
> with the memory encryption interface?

I do things a bit differently than x86: the SWIOTLB stuff is set up in
mem_init(). So I think we should be fine. If there is a down-side to
calling swiotlb_update_mem_attributes() earlier, honestly I'm
not sure.

> 
> [Does basic recognition work if you start a protected virt guest with a
> 3270 console? I realize that the console is unlikely to work, but that
> should at least exercise this code path.]

I've already had some thoughts along these lines and slapped  
-device x-terminal3270,chardev=char_0,devno=fe.0.000a,id=terminal_0 \
on my qemu command line. The ccw device does show up in the guest...

Device   Subchan.  DevType CU Type Use  PIM PAM POM  CHPIDs
----------------------------------------------------------------------
0.0.0000 0.0.0000  0000/00 3832/01 yes  80  80  ff   00000000 00000000 
0.0.000a 0.0.0001  0000/00 3270/00      80  80  ff   01000000 00000000 
0.0.0002 0.0.0002  0000/00 3832/09 yes  80  80  ff   00000000 00000000 
0.0.0300 0.0.0003  0000/00 3832/02 yes  80  80  ff   00000000 00000000 
0.0.0301 0.0.0004  0000/00 3832/02 yes  80  80  ff   00000000 00000000 

But I would not call it a comprehensive test...

Mimu, do we have something more elaborate with regards to this?

Regards,
Halil


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 3/8] s390/cio: add basic protected virtualization support
  2019-05-27 12:30     ` Halil Pasic
@ 2019-05-27 13:31       ` Cornelia Huck
  2019-05-29 12:24         ` Michael Mueller
  0 siblings, 1 reply; 36+ messages in thread
From: Cornelia Huck @ 2019-05-27 13:31 UTC (permalink / raw)
  To: Halil Pasic
  Cc: Michael Mueller, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Mon, 27 May 2019 14:30:14 +0200
Halil Pasic <pasic@linux.ibm.com> wrote:

> On Mon, 27 May 2019 12:38:02 +0200
> Cornelia Huck <cohuck@redhat.com> wrote:
> 
> > On Thu, 23 May 2019 18:22:04 +0200
> > Michael Mueller <mimu@linux.ibm.com> wrote:
> >   
> > > From: Halil Pasic <pasic@linux.ibm.com>
> > > 
> > > As virtio-ccw devices are channel devices, we need to use the dma area
> > > for any communication with the hypervisor.
> > > 
> > > It handles neither QDIO in the common code, nor any device type specific
> > > stuff (like channel programs constructed by the DASD driver).
> > > 
> > > An interesting side effect is that virtio structures are now going to
> > > get allocated in 31 bit addressable storage.
> > > 
> > > Signed-off-by: Halil Pasic <pasic@linux.ibm.com>  
> > 
> > [Side note: you really should add your s-o-b if you send someone else's
> > patches... if Halil ends up committing them, it's fine, though.]
> >   
> > > ---
> > >  arch/s390/include/asm/ccwdev.h   |  4 +++
> > >  drivers/s390/cio/ccwreq.c        |  9 +++---
> > >  drivers/s390/cio/device.c        | 64 +++++++++++++++++++++++++++++++++-------
> > >  drivers/s390/cio/device_fsm.c    | 53 ++++++++++++++++++++-------------
> > >  drivers/s390/cio/device_id.c     | 20 +++++++------
> > >  drivers/s390/cio/device_ops.c    | 21 +++++++++++--
> > >  drivers/s390/cio/device_pgid.c   | 22 +++++++-------
> > >  drivers/s390/cio/device_status.c | 24 +++++++--------
> > >  drivers/s390/cio/io_sch.h        | 20 +++++++++----
> > >  drivers/s390/virtio/virtio_ccw.c | 10 -------
> > >  10 files changed, 164 insertions(+), 83 deletions(-)
> > >   
> > 
> > (...)
> >   
> > > @@ -1593,20 +1622,31 @@ struct ccw_device * __init ccw_device_create_console(struct ccw_driver *drv)
> > >  		return ERR_CAST(sch);
> > >  
> > >  	io_priv = kzalloc(sizeof(*io_priv), GFP_KERNEL | GFP_DMA);
> > > -	if (!io_priv) {
> > > -		put_device(&sch->dev);
> > > -		return ERR_PTR(-ENOMEM);
> > > -	}
> > > +	if (!io_priv)
> > > +		goto err_priv;
> > > +	io_priv->dma_area = dma_alloc_coherent(&sch->dev,
> > > +				sizeof(*io_priv->dma_area),
> > > +				&io_priv->dma_area_dma, GFP_KERNEL);  
> > 
> > Even though we'll only end up here for 3215 or 3270 consoles, this sent
> > me looking.
> > 
> > This code is invoked via console_init(). A few lines down in
> > start_kernel(), we have
> > 
> >         /*                                                                       
> >          * This needs to be called before any devices perform DMA                
> >          * operations that might use the SWIOTLB bounce buffers. It will         
> >          * mark the bounce buffers as decrypted so that their usage will         
> >          * not cause "plain-text" data to be decrypted when accessed.            
> >          */
> >         mem_encrypt_init();
> > 
> > So, I'm wondering if creating the console device interacts in any way
> > with the memory encryption interface?  
> 
> I do things a bit differently than x86: the SWIOTLB stuff is set up in
> mem_init(). So I think we should be fine. If there is a down-side to
> calling swiotlb_update_mem_attributes() earlier, honestly I'm
> not sure.

Neither am I; do any of the folks who looked at the swiotlb patch have
an idea?

> 
> > 
> > [Does basic recognition work if you start a protected virt guest with a
> > 3270 console? I realize that the console is unlikely to work, but that
> > should at least exercise this code path.]  
> 
> I've already had some thoughts along these lines and slapped  
> -device x-terminal3270,chardev=char_0,devno=fe.0.000a,id=terminal_0 \
> on my qemu command line. The ccw device does show up in the guest...
> 
> Device   Subchan.  DevType CU Type Use  PIM PAM POM  CHPIDs
> ----------------------------------------------------------------------
> 0.0.0000 0.0.0000  0000/00 3832/01 yes  80  80  ff   00000000 00000000 
> 0.0.000a 0.0.0001  0000/00 3270/00      80  80  ff   01000000 00000000 
> 0.0.0002 0.0.0002  0000/00 3832/09 yes  80  80  ff   00000000 00000000 
> 0.0.0300 0.0.0003  0000/00 3832/02 yes  80  80  ff   00000000 00000000 
> 0.0.0301 0.0.0004  0000/00 3832/02 yes  80  80  ff   00000000 00000000 
> 
> But I would not call it a comprehensive test...

If you only add the device, it will show up as a normal ccw device in
the guest; i.e. device recognition is done at the same time as for the
other ccw devices. Still good to see that nothing breaks there :)

To actually make the guest use the 3270 as its console, I guess you
need to explicitly force it (see
https://wiki.qemu.org/Features/3270#Using_3270_as_the_console)...
actually starting the console will almost certainly fail; but you can
at least check whether device recognition in the console path works.

> 
> Mimu, do we have something more elaborate with regards to this?

I don't think we need extensive testing here; just checking that the
sequence is not fundamentally broken.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 3/8] s390/cio: add basic protected virtualization support
  2019-05-25  9:44   ` Sebastian Ott
@ 2019-05-27 15:01     ` Michael Mueller
  0 siblings, 0 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-27 15:01 UTC (permalink / raw)
  To: Sebastian Ott
  Cc: KVM Mailing List, Linux-S390 Mailing List, Cornelia Huck,
	Heiko Carstens, Halil Pasic, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel



On 25.05.19 11:44, Sebastian Ott wrote:
> 
> On Thu, 23 May 2019, Michael Mueller wrote:
>>   static struct ccw_device * io_subchannel_allocate_dev(struct subchannel *sch)
>>   {
>>   	struct ccw_device *cdev;
>> +	struct gen_pool *dma_pool;
>>   
>>   	cdev  = kzalloc(sizeof(*cdev), GFP_KERNEL);
>> -	if (cdev) {
>> -		cdev->private = kzalloc(sizeof(struct ccw_device_private),
>> -					GFP_KERNEL | GFP_DMA);
>> -		if (cdev->private)
>> -			return cdev;
>> -	}
>> +	if (!cdev)
>> +		goto err_cdev;
>> +	cdev->private = kzalloc(sizeof(struct ccw_device_private),
>> +				GFP_KERNEL | GFP_DMA);
>> +	if (!cdev->private)
>> +		goto err_priv;
>> +	cdev->dev.coherent_dma_mask = sch->dev.coherent_dma_mask;
>> +	cdev->dev.dma_mask = &cdev->dev.coherent_dma_mask;
>> +	dma_pool = cio_gp_dma_create(&cdev->dev, 1);
> 
> This can return NULL. gen_pool_alloc will panic in this case.
> [...]

yep, will be handled in the next version

> 
>> +err_dma_area:
>> +		kfree(io_priv);

yep, the extra tab is gone

> 
> Indentation.
> 
>> +err_priv:
>> +	put_device(&sch->dev);
>> +	return ERR_PTR(-ENOMEM);
>>   }
> [...]
>>   void ccw_device_update_sense_data(struct ccw_device *cdev)
>>   {
>>   	memset(&cdev->id, 0, sizeof(cdev->id));
>> -	cdev->id.cu_type   = cdev->private->senseid.cu_type;
>> -	cdev->id.cu_model  = cdev->private->senseid.cu_model;
>> -	cdev->id.dev_type  = cdev->private->senseid.dev_type;
>> -	cdev->id.dev_model = cdev->private->senseid.dev_model;
>> +	cdev->id.cu_type   =
>> +		cdev->private->dma_area->senseid.cu_type;
>> +	cdev->id.cu_model  =
>> +		cdev->private->dma_area->senseid.cu_model;
>> +	cdev->id.dev_type  =
>> +		cdev->private->dma_area->senseid.dev_type;
>> +	cdev->id.dev_model =
>> +		cdev->private->dma_area->senseid.dev_model;
> 
> These fit into one line.

yep, surprisingly below 80 characters

> 
>> +/**
>> + * Allocate zeroed dma coherent 31 bit addressable memory using
>> + * the subchannels dma pool. Maximal size of allocation supported
>> + * is PAGE_SIZE.
>> + */
> drivers/s390/cio/device_ops.c:708: warning: Function parameter or member 'cdev' not described in 'ccw_device_dma_zalloc'
> drivers/s390/cio/device_ops.c:708: warning: Function parameter or member 'size' not described in 'ccw_device_dma_zalloc'

changing the comment opening token from /** to /* so it is not parsed as kernel-doc

> 
> 
> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com>
> 

Thanks!


Michael


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA
  2019-05-27 12:00   ` Cornelia Huck
@ 2019-05-28 14:33     ` Halil Pasic
  2019-05-28 14:56       ` Cornelia Huck
  2019-05-28 14:58       ` Michael Mueller
  0 siblings, 2 replies; 36+ messages in thread
From: Halil Pasic @ 2019-05-28 14:33 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Michael Mueller, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Mon, 27 May 2019 14:00:18 +0200
Cornelia Huck <cohuck@redhat.com> wrote:

> On Thu, 23 May 2019 18:22:09 +0200
> Michael Mueller <mimu@linux.ibm.com> wrote:
> 
> > From: Halil Pasic <pasic@linux.ibm.com>
> > 
> > Hypervisor needs to interact with the summary indicators, so these
> > need to be DMA memory as well (at least for protected virtualization
> > guests).
> > 
> > Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> > ---
> >  drivers/s390/virtio/virtio_ccw.c | 22 +++++++++++++++-------
> >  1 file changed, 15 insertions(+), 7 deletions(-)
> 
> (...)
> 
> > @@ -1501,6 +1508,7 @@ static int __init virtio_ccw_init(void)
> >  {
> >  	/* parse no_auto string before we do anything further */
> >  	no_auto_parse();
> > +	summary_indicators = cio_dma_zalloc(MAX_AIRQ_AREAS);
> 
> What happens if this fails?

Bad things could happen!

How about adding

if (!summary_indicators)
	virtio_ccw_use_airq = 0; /* fall back to classic */

?

Since it ain't very likely to happen, we could also just fail
virtio_ccw_init() with -ENOMEM.

Regards,
Halil


> 
> >  	return ccw_driver_register(&virtio_ccw_driver);
> >  }
> >  device_initcall(virtio_ccw_init);
> 


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA
  2019-05-28 14:33     ` Halil Pasic
@ 2019-05-28 14:56       ` Cornelia Huck
  2019-05-28 14:58       ` Michael Mueller
  1 sibling, 0 replies; 36+ messages in thread
From: Cornelia Huck @ 2019-05-28 14:56 UTC (permalink / raw)
  To: Halil Pasic
  Cc: Michael Mueller, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Tue, 28 May 2019 16:33:42 +0200
Halil Pasic <pasic@linux.ibm.com> wrote:

> On Mon, 27 May 2019 14:00:18 +0200
> Cornelia Huck <cohuck@redhat.com> wrote:
> 
> > On Thu, 23 May 2019 18:22:09 +0200
> > Michael Mueller <mimu@linux.ibm.com> wrote:
> >   
> > > From: Halil Pasic <pasic@linux.ibm.com>
> > > 
> > > Hypervisor needs to interact with the summary indicators, so these
> > > need to be DMA memory as well (at least for protected virtualization
> > > guests).
> > > 
> > > Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> > > ---
> > >  drivers/s390/virtio/virtio_ccw.c | 22 +++++++++++++++-------
> > >  1 file changed, 15 insertions(+), 7 deletions(-)  
> > 
> > (...)
> >   
> > > @@ -1501,6 +1508,7 @@ static int __init virtio_ccw_init(void)
> > >  {
> > >  	/* parse no_auto string before we do anything further */
> > >  	no_auto_parse();
> > > +	summary_indicators = cio_dma_zalloc(MAX_AIRQ_AREAS);  
> > 
> > What happens if this fails?  
> 
> Bad things could happen!
> 
> How about adding
> 
> if (!summary_indicators)
> 	virtio_ccw_use_airq = 0; /* fall back to classic */
> 
> ?
> 
> Since it ain't very likely to happen, we could also just fail
> virtio_ccw_init() with -ENOMEM.

How high are the chances of things working if we fail to allocate here?
Returning with -ENOMEM is probably the more reasonable approach here.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA
  2019-05-28 14:33     ` Halil Pasic
  2019-05-28 14:56       ` Cornelia Huck
@ 2019-05-28 14:58       ` Michael Mueller
  1 sibling, 0 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-28 14:58 UTC (permalink / raw)
  To: Halil Pasic, Cornelia Huck
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel



On 28.05.19 16:33, Halil Pasic wrote:
> On Mon, 27 May 2019 14:00:18 +0200
> Cornelia Huck <cohuck@redhat.com> wrote:
> 
>> On Thu, 23 May 2019 18:22:09 +0200
>> Michael Mueller <mimu@linux.ibm.com> wrote:
>>
>>> From: Halil Pasic <pasic@linux.ibm.com>
>>>
>>> Hypervisor needs to interact with the summary indicators, so these
>>> need to be DMA memory as well (at least for protected virtualization
>>> guests).
>>>
>>> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
>>> ---
>>>   drivers/s390/virtio/virtio_ccw.c | 22 +++++++++++++++-------
>>>   1 file changed, 15 insertions(+), 7 deletions(-)
>>
>> (...)
>>
>>> @@ -1501,6 +1508,7 @@ static int __init virtio_ccw_init(void)
>>>   {
>>>   	/* parse no_auto string before we do anything further */
>>>   	no_auto_parse();
>>> +	summary_indicators = cio_dma_zalloc(MAX_AIRQ_AREAS);
>>
>> What happens if this fails?
> 
> Bad things could happen!
> 
> How about adding
> 
> if (!summary_indicators)
> 	virtio_ccw_use_airq = 0; /* fall back to classic */
> 
> ?
> 
> Since it ain't very likely to happen, we could also just fail
> virtio_ccw_init() with -ENOMEM.

That is what I'm currently doing in v3.

> 
> Regards,
> Halil
> 
> 
>>
>>>   	return ccw_driver_register(&virtio_ccw_driver);
>>>   }
>>>   device_initcall(virtio_ccw_init);
>>
> 

Michael


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 6/8] virtio/s390: add indirection to indicators access
  2019-05-27 12:10       ` Cornelia Huck
@ 2019-05-29 11:05         ` Michael Mueller
  0 siblings, 0 replies; 36+ messages in thread
From: Michael Mueller @ 2019-05-29 11:05 UTC (permalink / raw)
  To: Cornelia Huck, Halil Pasic
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel



On 27.05.19 14:10, Cornelia Huck wrote:
> On Mon, 27 May 2019 13:57:06 +0200
> Halil Pasic <pasic@linux.ibm.com> wrote:
> 
>> On Mon, 27 May 2019 13:00:28 +0200
>> Cornelia Huck <cohuck@redhat.com> wrote:
>>
>>> On Thu, 23 May 2019 18:22:07 +0200
>>> Michael Mueller <mimu@linux.ibm.com> wrote:
>>>    
>>>> From: Halil Pasic <pasic@linux.ibm.com>
>>>>
>>>> This will come in handy soon when we pull out the indicators from
>>>> virtio_ccw_device to a memory area that is shared with the hypervisor
>>>> (in particular for protected virtualization guests).
>>>>
>>>> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
>>>> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
>>>> ---
>>>>   drivers/s390/virtio/virtio_ccw.c | 40 +++++++++++++++++++++++++---------------
>>>>   1 file changed, 25 insertions(+), 15 deletions(-)
>>>>    
>>>    
>>>> @@ -338,17 +348,17 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
>>>>   		ccw->cda = (__u32)(unsigned long) thinint_area;
>>>>   	} else {
>>>>   		/* payload is the address of the indicators */
>>>> -		indicatorp = kmalloc(sizeof(&vcdev->indicators),
>>>> +		indicatorp = kmalloc(sizeof(indicators(vcdev)),
>>>>   				     GFP_DMA | GFP_KERNEL);
>>>>   		if (!indicatorp)
>>>>   			return;
>>>>   		*indicatorp = 0;
>>>>   		ccw->cmd_code = CCW_CMD_SET_IND;
>>>> -		ccw->count = sizeof(&vcdev->indicators);
>>>> +		ccw->count = sizeof(indicators(vcdev));
>>>>   		ccw->cda = (__u32)(unsigned long) indicatorp;
>>>>   	}
>>>>   	/* Deregister indicators from host. */
>>>> -	vcdev->indicators = 0;
>>>> +	*indicators(vcdev) = 0;
>>>
>>> I'm not too hot about this notation, but it's not wrong and a minor
>>> thing :)
>>
>> I don't have any better ideas :/
>>
>>>    
>>>>   	ccw->flags = 0;
>>>>   	ret = ccw_io_helper(vcdev, ccw,
>>>>   			    vcdev->is_thinint ?
>>>
>>> Patch looks reasonable and not dependent on the other patches here.
>>>    
>>
>> looks reasonable == r-b?
>>
>> Not dependent in a sense that this patch could be made a first patch in
>> the series. A subsequent patch depends on it.
> 
> What is the plan with these patches? I can either pick patch 5+6 and
> let them go through the virtio tree, or give my r-b and let them go
> through the s390 tree. The former is probably the quicker route, but
> the latter has less potential for dependency issues.

please give your r-b then for these.

> 

Michael


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 3/8] s390/cio: add basic protected virtualization support
  2019-05-27 13:31       ` Cornelia Huck
@ 2019-05-29 12:24         ` Michael Mueller
  2019-05-29 12:30           ` Cornelia Huck
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Mueller @ 2019-05-29 12:24 UTC (permalink / raw)
  To: Cornelia Huck, Halil Pasic
  Cc: KVM Mailing List, Linux-S390 Mailing List, Sebastian Ott,
	Heiko Carstens, virtualization, Michael S . Tsirkin,
	Christoph Hellwig, Thomas Huth, Christian Borntraeger,
	Viktor Mihajlovski, Vasily Gorbik, Janosch Frank,
	Claudio Imbrenda, Farhan Ali, Eric Farman, Pierre Morel



On 27.05.19 15:31, Cornelia Huck wrote:
> On Mon, 27 May 2019 14:30:14 +0200
> Halil Pasic <pasic@linux.ibm.com> wrote:
> 
>> On Mon, 27 May 2019 12:38:02 +0200
>> Cornelia Huck <cohuck@redhat.com> wrote:
>>
>>> On Thu, 23 May 2019 18:22:04 +0200
>>> Michael Mueller <mimu@linux.ibm.com> wrote:
>>>    
>>>> From: Halil Pasic <pasic@linux.ibm.com>
>>>>
>>>> As virtio-ccw devices are channel devices, we need to use the dma area
>>>> for any communication with the hypervisor.
>>>>
>>>> This patch handles neither QDIO in the common code nor any
>>>> device-type-specific stuff (like channel programs constructed by the
>>>> DASD driver).
>>>>
>>>> An interesting side effect is that virtio structures are now going to
>>>> get allocated in 31 bit addressable storage.
>>>>
>>>> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
>>>
>>> [Side note: you really should add your s-o-b if you send someone else's
>>> patches... if Halil ends up committing them, it's fine, though.]
>>>    
>>>> ---
>>>>   arch/s390/include/asm/ccwdev.h   |  4 +++
>>>>   drivers/s390/cio/ccwreq.c        |  9 +++---
>>>>   drivers/s390/cio/device.c        | 64 +++++++++++++++++++++++++++++++++-------
>>>>   drivers/s390/cio/device_fsm.c    | 53 ++++++++++++++++++++-------------
>>>>   drivers/s390/cio/device_id.c     | 20 +++++++------
>>>>   drivers/s390/cio/device_ops.c    | 21 +++++++++++--
>>>>   drivers/s390/cio/device_pgid.c   | 22 +++++++-------
>>>>   drivers/s390/cio/device_status.c | 24 +++++++--------
>>>>   drivers/s390/cio/io_sch.h        | 20 +++++++++----
>>>>   drivers/s390/virtio/virtio_ccw.c | 10 -------
>>>>   10 files changed, 164 insertions(+), 83 deletions(-)
>>>>    
>>>
>>> (...)
>>>    
>>>> @@ -1593,20 +1622,31 @@ struct ccw_device * __init ccw_device_create_console(struct ccw_driver *drv)
>>>>   		return ERR_CAST(sch);
>>>>   
>>>>   	io_priv = kzalloc(sizeof(*io_priv), GFP_KERNEL | GFP_DMA);
>>>> -	if (!io_priv) {
>>>> -		put_device(&sch->dev);
>>>> -		return ERR_PTR(-ENOMEM);
>>>> -	}
>>>> +	if (!io_priv)
>>>> +		goto err_priv;
>>>> +	io_priv->dma_area = dma_alloc_coherent(&sch->dev,
>>>> +				sizeof(*io_priv->dma_area),
>>>> +				&io_priv->dma_area_dma, GFP_KERNEL);
>>>
>>> Even though we'll only end up here for 3215 or 3270 consoles, this sent
>>> me looking.
>>>
>>> This code is invoked via console_init(). A few lines down in
>>> start_kernel(), we have
>>>
>>>          /*
>>>           * This needs to be called before any devices perform DMA
>>>           * operations that might use the SWIOTLB bounce buffers. It will
>>>           * mark the bounce buffers as decrypted so that their usage will
>>>           * not cause "plain-text" data to be decrypted when accessed.
>>>           */
>>>          mem_encrypt_init();
>>>
>>> So, I'm wondering if creating the console device interacts in any way
>>> with the memory encryption interface?
>>
>> I do things a bit differently than x86: the SWIOTLB stuff is set up in
>> mem_init(). So I think we should be fine. Whether there is a downside to
>> calling swiotlb_update_mem_attributes() earlier, I'm honestly not sure.
> 
> Neither am I; do any of the folks who looked at the swiotlb patch have
> an idea?
> 
>>
>>>
>>> [Does basic recognition work if you start a protected virt guest with a
>>> 3270 console? I realize that the console is unlikely to work, but that
>>> should at least exercise this code path.]
>>
>> I've already had some thoughts along these lines and slapped
>> -device x-terminal3270,chardev=char_0,devno=fe.0.000a,id=terminal_0 \
>> on my qemu command line. The ccw device does show up in the guest...
>>
>> Device   Subchan.  DevType CU Type Use  PIM PAM POM  CHPIDs
>> ----------------------------------------------------------------------
>> 0.0.0000 0.0.0000  0000/00 3832/01 yes  80  80  ff   00000000 00000000
>> 0.0.000a 0.0.0001  0000/00 3270/00      80  80  ff   01000000 00000000
>> 0.0.0002 0.0.0002  0000/00 3832/09 yes  80  80  ff   00000000 00000000
>> 0.0.0300 0.0.0003  0000/00 3832/02 yes  80  80  ff   00000000 00000000
>> 0.0.0301 0.0.0004  0000/00 3832/02 yes  80  80  ff   00000000 00000000
>>
>> But I would not call it a comprehensive test...
> 
> If you only add the device, it will show up as a normal ccw device in
> the guest; i.e. device recognition is done at the same time as for the
> other ccw devices. Still good to see that nothing breaks there :)
> 
> To actually make the guest use the 3270 as its console, I guess you
> need to explicitly force it (see
> https://wiki.qemu.org/Features/3270#Using_3270_as_the_console)...
> actually starting the console will almost certainly fail; but you can
> at least check whether device recognition in the console path works.
> 
>>
>> Mimu, do we have something more elaborate with regards to this?

I ran that successfully:

[root@ap01 ~]# lscss | grep 3270
0.0.002a 0.0.0008  0000/00 3270/00 yes  80  80  ff   01000000 00000000

and was able to connect and login.

Michael

> 
> I don't think we need extensive testing here; just checking that the
> sequence is not fundamentally broken.
> 



* Re: [PATCH v2 3/8] s390/cio: add basic protected virtualization support
  2019-05-29 12:24         ` Michael Mueller
@ 2019-05-29 12:30           ` Cornelia Huck
  0 siblings, 0 replies; 36+ messages in thread
From: Cornelia Huck @ 2019-05-29 12:30 UTC (permalink / raw)
  To: Michael Mueller
  Cc: Halil Pasic, KVM Mailing List, Linux-S390 Mailing List,
	Sebastian Ott, Heiko Carstens, virtualization,
	Michael S . Tsirkin, Christoph Hellwig, Thomas Huth,
	Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
	Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman,
	Pierre Morel

On Wed, 29 May 2019 14:24:39 +0200
Michael Mueller <mimu@linux.ibm.com> wrote:

> On 27.05.19 15:31, Cornelia Huck wrote:

> > To actually make the guest use the 3270 as its console, I guess you
> > need to explicitly force it (see
> > https://wiki.qemu.org/Features/3270#Using_3270_as_the_console)...
> > actually starting the console will almost certainly fail; but you can
> > at least check whether device recognition in the console path works.
> >   
> >>
> >> Mimu, do we have something more elaborate with regards to this?  
> 
> I ran that successfully:
> 
> [root@ap01 ~]# lscss | grep 3270
> 0.0.002a 0.0.0008  0000/00 3270/00 yes  80  80  ff   01000000 00000000
> 
> and was able to connect and login.

Oh, cool. I'm actually a bit surprised this works without additional
changes to the 3270 code :)


end of thread, other threads:[~2019-05-29 12:30 UTC | newest]

Thread overview: 36+ messages
2019-05-23 16:22 [PATCH v2 0/8] s390: virtio: support protected virtualization Michael Mueller
2019-05-23 16:22 ` [PATCH v2 1/8] s390/mm: force swiotlb for " Michael Mueller
2019-05-23 16:22 ` [PATCH v2 2/8] s390/cio: introduce DMA pools to cio Michael Mueller
2019-05-25  9:22   ` Sebastian Ott
2019-05-27 11:26     ` Michael Mueller
2019-05-27  6:57   ` Cornelia Huck
2019-05-27 11:47     ` Halil Pasic
2019-05-27 12:06       ` Cornelia Huck
2019-05-27 12:00     ` Michael Mueller
2019-05-23 16:22 ` [PATCH v2 3/8] s390/cio: add basic protected virtualization support Michael Mueller
2019-05-25  9:44   ` Sebastian Ott
2019-05-27 15:01     ` Michael Mueller
2019-05-27 10:38   ` Cornelia Huck
2019-05-27 12:15     ` Michael Mueller
2019-05-27 12:30     ` Halil Pasic
2019-05-27 13:31       ` Cornelia Huck
2019-05-29 12:24         ` Michael Mueller
2019-05-29 12:30           ` Cornelia Huck
2019-05-23 16:22 ` [PATCH v2 4/8] s390/airq: use DMA memory for adapter interrupts Michael Mueller
2019-05-25  9:51   ` Sebastian Ott
2019-05-27 10:53   ` Cornelia Huck
2019-05-23 16:22 ` [PATCH v2 5/8] virtio/s390: use cacheline aligned airq bit vectors Michael Mueller
2019-05-27 10:55   ` Cornelia Huck
2019-05-27 12:03     ` Halil Pasic
2019-05-23 16:22 ` [PATCH v2 6/8] virtio/s390: add indirection to indicators access Michael Mueller
2019-05-27 11:00   ` Cornelia Huck
2019-05-27 11:57     ` Halil Pasic
2019-05-27 12:10       ` Cornelia Huck
2019-05-29 11:05         ` Michael Mueller
2019-05-23 16:22 ` [PATCH v2 7/8] virtio/s390: use DMA memory for ccw I/O and classic notifiers Michael Mueller
2019-05-27 11:49   ` Cornelia Huck
2019-05-23 16:22 ` [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA Michael Mueller
2019-05-27 12:00   ` Cornelia Huck
2019-05-28 14:33     ` Halil Pasic
2019-05-28 14:56       ` Cornelia Huck
2019-05-28 14:58       ` Michael Mueller
