* [Qemu-devel] [PATCH v3 0/5] Retrieving zPCI specific info from QEMU
@ 2019-09-07  0:16 Matthew Rosato
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 1/5] vfio: vfio_iommu_type1: linux header place holder Matthew Rosato
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Matthew Rosato @ 2019-09-07  0:16 UTC (permalink / raw)
  To: cohuck
  Cc: walling, alex.williamson, pmorel, david, mst, qemu-devel, pasic,
	borntraeger, qemu-s390x, pbonzini, rth

Note: These patches by Pierre got lost in the ether a few months back
as he has been unavailable to carry them forward.  I've made changes
based upon comments received on the kernel part of his last version.

This series implements the QEMU part needed to retrieve zPCI specific
information from the host.
The Linux kernel part has been posted as a separate series on the LKML:

Subject: [PATCH v4 0/4] Retrieving zPCI specific info with VFIO
  
We use the PCI VFIO interface to retrieve zPCI specific information
from the host through a zPCI specific device region.

The retrieval is done only once per function and per function group,
when the device is plugged.
The guest's requests are then answered from shadow values that we
keep for as long as the device remains plugged.

There are still some values we need to virtualize, such as
the UID and FID of the zPCI function, and we currently
expose only the refresh bit of the zPCI group flags.

Note that we export the CLP specific definitions in a dedicated
file for clarity.
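
For reference, below is a rough, untested sketch of how a standalone
userland program could locate and read this region with the raw VFIO
ioctls, assuming the vfio.h/vfio_zdev.h definitions from patch 1
(QEMU itself goes through vfio_get_dev_region_info(), see patch 5).
The helper names and the local PCI_VENDOR_ID_IBM define are for
illustration only:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <linux/vfio_zdev.h>

#define PCI_VENDOR_ID_IBM 0x1014

/* Return the region info of the zPCI CLP region, or NULL if absent. */
static struct vfio_region_info *find_clp_region(int dev_fd)
{
    struct vfio_device_info dev = { .argsz = sizeof(dev) };
    unsigned int i;

    if (ioctl(dev_fd, VFIO_DEVICE_GET_INFO, &dev)) {
        return NULL;
    }
    /* Device-specific regions are indexed after the fixed PCI regions */
    for (i = VFIO_PCI_NUM_REGIONS; i < dev.num_regions; i++) {
        /* 256 extra bytes is plenty for the single type capability */
        size_t argsz = sizeof(struct vfio_region_info) + 256;
        struct vfio_region_info *info = calloc(1, argsz);
        __u32 off;

        info->argsz = argsz;
        info->index = i;
        if (ioctl(dev_fd, VFIO_DEVICE_GET_REGION_INFO, info) ||
            !(info->flags & VFIO_REGION_INFO_FLAG_CAPS)) {
            free(info);
            continue;
        }
        /* Walk the capability chain, looking for the IBM CLP subtype */
        for (off = info->cap_offset; off; ) {
            struct vfio_info_cap_header *hdr =
                (struct vfio_info_cap_header *)((char *)info + off);
            struct vfio_region_info_cap_type *cap =
                (struct vfio_region_info_cap_type *)hdr;

            if (hdr->id == VFIO_REGION_INFO_CAP_TYPE &&
                cap->type == (VFIO_REGION_TYPE_PCI_VENDOR_TYPE |
                              PCI_VENDOR_ID_IBM) &&
                cap->subtype == VFIO_REGION_SUBTYPE_ZDEV_CLP) {
                return info;                    /* caller frees */
            }
            off = hdr->next;
        }
        free(info);
    }
    return NULL;
}

/* dev_fd is an open VFIO device fd; container/group setup is omitted. */
static int dump_clp_info(int dev_fd)
{
    struct vfio_region_info *info = find_clp_region(dev_fd);
    struct vfio_region_zpci_info clp;
    int rc = -1;

    if (!info) {
        fprintf(stderr, "no zPCI CLP region (older kernel?)\n");
        return rc;
    }
    if (pread(dev_fd, &clp, sizeof(clp), info->offset) == sizeof(clp)) {
        printf("pchid=0x%x gid=%u start_dma=0x%llx end_dma=0x%llx\n",
               clp.pchid, clp.gid,
               (unsigned long long)clp.start_dma,
               (unsigned long long)clp.end_dma);
        rc = 0;
    }
    free(info);
    return rc;
}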

Changes since v2:
- update vfio_zdev.h to match kernel + fix packing attribute
- update vfio.h to match kernel changes

Pierre Morel (5):
  vfio: vfio_iommu_type1: linux header place holder
  s390: PCI: Create a header dedicated to PCI CLP
  s390: vfio_pci: Use a PCI Group structure
  s390: vfio_pci: Use a PCI Function structure
  s390: vfio_pci: Get zPCI function info from host

 hw/s390x/s390-pci-bus.c         | 145 +++++++++++++++++++++++++--
 hw/s390x/s390-pci-bus.h         |  15 ++-
 hw/s390x/s390-pci-clp.h         | 211 ++++++++++++++++++++++++++++++++++++++++
 hw/s390x/s390-pci-inst.c        |  28 +++---
 hw/s390x/s390-pci-inst.h        | 196 -------------------------------------
 linux-headers/linux/vfio.h      |   7 +-
 linux-headers/linux/vfio_zdev.h |  35 +++++++
 7 files changed, 417 insertions(+), 220 deletions(-)
 create mode 100644 hw/s390x/s390-pci-clp.h
 create mode 100644 linux-headers/linux/vfio_zdev.h

-- 
1.8.3.1




* [Qemu-devel] [PATCH v3 1/5] vfio: vfio_iommu_type1: linux header place holder
  2019-09-07  0:16 [Qemu-devel] [PATCH v3 0/5] Retrieving zPCI specific info from QEMU Matthew Rosato
@ 2019-09-07  0:16 ` Matthew Rosato
  2019-09-08 13:08   ` Michael S. Tsirkin
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 2/5] s390: PCI: Create a header dedicated to PCI CLP Matthew Rosato
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Matthew Rosato @ 2019-09-07  0:16 UTC (permalink / raw)
  To: cohuck
  Cc: walling, alex.williamson, pmorel, david, mst, qemu-devel, pasic,
	borntraeger, qemu-s390x, pbonzini, rth

From: Pierre Morel <pmorel@linux.ibm.com>

These definitions should be copied from the Linux kernel UAPI
includes. The version used here is based on Linux 5.3.0.

We define a new device region in vfio.h so that the ZPCI CLP
information can be retrieved by reading this region from
userland.

We create a new file, vfio_zdev.h, to define the structure
of the new region declared in vfio.h.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
---
 linux-headers/linux/vfio.h      |  7 ++++---
 linux-headers/linux/vfio_zdev.h | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 3 deletions(-)
 create mode 100644 linux-headers/linux/vfio_zdev.h

diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
index 24f5051..8328c87 100644
--- a/linux-headers/linux/vfio.h
+++ b/linux-headers/linux/vfio.h
@@ -9,8 +9,8 @@
  * it under the terms of the GNU General Public License version 2 as
  * published by the Free Software Foundation.
  */
-#ifndef VFIO_H
-#define VFIO_H
+#ifndef _UAPIVFIO_H
+#define _UAPIVFIO_H
 
 #include <linux/types.h>
 #include <linux/ioctl.h>
@@ -371,6 +371,7 @@ struct vfio_region_gfx_edid {
  * to do TLB invalidation on a GPU.
  */
 #define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
+#define VFIO_REGION_SUBTYPE_ZDEV_CLP		(2)
 
 /*
  * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
@@ -914,4 +915,4 @@ struct vfio_iommu_spapr_tce_remove {
 
 /* ***************************************************************** */
 
-#endif /* VFIO_H */
+#endif /* _UAPIVFIO_H */
diff --git a/linux-headers/linux/vfio_zdev.h b/linux-headers/linux/vfio_zdev.h
new file mode 100644
index 0000000..2b912a5
--- /dev/null
+++ b/linux-headers/linux/vfio_zdev.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Region definition for ZPCI devices
+ *
+ * Copyright IBM Corp. 2019
+ *
+ * Author(s): Pierre Morel <pmorel@linux.ibm.com>
+ */
+
+#ifndef _VFIO_ZDEV_H_
+#define _VFIO_ZDEV_H_
+
+#include <linux/types.h>
+
+/**
+ * struct vfio_region_zpci_info - ZPCI information.
+ *
+ */
+struct vfio_region_zpci_info {
+	__u64 dasm;
+	__u64 start_dma;
+	__u64 end_dma;
+	__u64 msi_addr;
+	__u64 flags;
+	__u16 pchid;
+	__u16 mui;
+	__u16 noi;
+	__u16 maxstbl;
+	__u8 version;
+	__u8 gid;
+#define VFIO_PCI_ZDEV_FLAGS_REFRESH 1
+	__u8 util_str[];
+} __attribute__ ((__packed__));
+
+#endif
-- 
1.8.3.1




* [Qemu-devel] [PATCH v3 2/5] s390: PCI: Create a header dedicated to PCI CLP
  2019-09-07  0:16 [Qemu-devel] [PATCH v3 0/5] Retrieving zPCI specific info from QEMU Matthew Rosato
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 1/5] vfio: vfio_iommu_type1: linux header place holder Matthew Rosato
@ 2019-09-07  0:16 ` Matthew Rosato
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 3/5] s390: vfio_pci: Use a PCI Group structure Matthew Rosato
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Matthew Rosato @ 2019-09-07  0:16 UTC (permalink / raw)
  To: cohuck
  Cc: walling, alex.williamson, pmorel, david, mst, qemu-devel, pasic,
	borntraeger, qemu-s390x, pbonzini, rth

From: Pierre Morel <pmorel@linux.ibm.com>

To have a clean separation between the s390-pci-bus.h
and s390-pci-inst.h headers, we move the PCI CLP
instruction definitions into a dedicated header.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
---
 hw/s390x/s390-pci-bus.h  |   1 +
 hw/s390x/s390-pci-clp.h  | 211 +++++++++++++++++++++++++++++++++++++++++++++++
 hw/s390x/s390-pci-inst.h | 196 -------------------------------------------
 3 files changed, 212 insertions(+), 196 deletions(-)
 create mode 100644 hw/s390x/s390-pci-clp.h

diff --git a/hw/s390x/s390-pci-bus.h b/hw/s390x/s390-pci-bus.h
index 550f3cc..a5d2049 100644
--- a/hw/s390x/s390-pci-bus.h
+++ b/hw/s390x/s390-pci-bus.h
@@ -19,6 +19,7 @@
 #include "hw/s390x/sclp.h"
 #include "hw/s390x/s390_flic.h"
 #include "hw/s390x/css.h"
+#include "s390-pci-clp.h"
 
 #define TYPE_S390_PCI_HOST_BRIDGE "s390-pcihost"
 #define TYPE_S390_PCI_BUS "s390-pcibus"
diff --git a/hw/s390x/s390-pci-clp.h b/hw/s390x/s390-pci-clp.h
new file mode 100644
index 0000000..e442307
--- /dev/null
+++ b/hw/s390x/s390-pci-clp.h
@@ -0,0 +1,211 @@
+/*
+ * s390 CLP instruction definitions
+ *
+ * Copyright 2019 IBM Corp.
+ * Author(s): Pierre Morel <pmorel@de.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#ifndef HW_S390_PCI_CLP
+#define HW_S390_PCI_CLP
+
+/* CLP common request & response block size */
+#define CLP_BLK_SIZE 4096
+#define PCI_BAR_COUNT 6
+#define PCI_MAX_FUNCTIONS 4096
+
+typedef struct ClpReqHdr {
+    uint16_t len;
+    uint16_t cmd;
+} QEMU_PACKED ClpReqHdr;
+
+typedef struct ClpRspHdr {
+    uint16_t len;
+    uint16_t rsp;
+} QEMU_PACKED ClpRspHdr;
+
+/* CLP Response Codes */
+#define CLP_RC_OK         0x0010  /* Command request successfully */
+#define CLP_RC_CMD        0x0020  /* Command code not recognized */
+#define CLP_RC_PERM       0x0030  /* Command not authorized */
+#define CLP_RC_FMT        0x0040  /* Invalid command request format */
+#define CLP_RC_LEN        0x0050  /* Invalid command request length */
+#define CLP_RC_8K         0x0060  /* Command requires 8K LPCB */
+#define CLP_RC_RESNOT0    0x0070  /* Reserved field not zero */
+#define CLP_RC_NODATA     0x0080  /* No data available */
+#define CLP_RC_FC_UNKNOWN 0x0100  /* Function code not recognized */
+
+/*
+ * Call Logical Processor - Command Codes
+ */
+#define CLP_LIST_PCI            0x0002
+#define CLP_QUERY_PCI_FN        0x0003
+#define CLP_QUERY_PCI_FNGRP     0x0004
+#define CLP_SET_PCI_FN          0x0005
+
+/* PCI function handle list entry */
+typedef struct ClpFhListEntry {
+    uint16_t device_id;
+    uint16_t vendor_id;
+#define CLP_FHLIST_MASK_CONFIG 0x80000000
+    uint32_t config;
+    uint32_t fid;
+    uint32_t fh;
+} QEMU_PACKED ClpFhListEntry;
+
+#define CLP_RC_SETPCIFN_FH      0x0101 /* Invalid PCI fn handle */
+#define CLP_RC_SETPCIFN_FHOP    0x0102 /* Fn handle not valid for op */
+#define CLP_RC_SETPCIFN_DMAAS   0x0103 /* Invalid DMA addr space */
+#define CLP_RC_SETPCIFN_RES     0x0104 /* Insufficient resources */
+#define CLP_RC_SETPCIFN_ALRDY   0x0105 /* Fn already in requested state */
+#define CLP_RC_SETPCIFN_ERR     0x0106 /* Fn in permanent error state */
+#define CLP_RC_SETPCIFN_RECPND  0x0107 /* Error recovery pending */
+#define CLP_RC_SETPCIFN_BUSY    0x0108 /* Fn busy */
+#define CLP_RC_LISTPCI_BADRT    0x010a /* Resume token not recognized */
+#define CLP_RC_QUERYPCIFG_PFGID 0x010b /* Unrecognized PFGID */
+
+/* request or response block header length */
+#define LIST_PCI_HDR_LEN 32
+
+/* Number of function handles fitting in response block */
+#define CLP_FH_LIST_NR_ENTRIES \
+    ((CLP_BLK_SIZE - 2 * LIST_PCI_HDR_LEN) \
+        / sizeof(ClpFhListEntry))
+
+#define CLP_SET_ENABLE_PCI_FN  0 /* Yes, 0 enables it */
+#define CLP_SET_DISABLE_PCI_FN 1 /* Yes, 1 disables it */
+
+#define CLP_UTIL_STR_LEN 64
+
+#define CLP_MASK_FMT 0xf0000000
+
+/* List PCI functions request */
+typedef struct ClpReqListPci {
+    ClpReqHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint64_t resume_token;
+    uint64_t reserved2;
+} QEMU_PACKED ClpReqListPci;
+
+/* List PCI functions response */
+typedef struct ClpRspListPci {
+    ClpRspHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint64_t resume_token;
+    uint32_t mdd;
+    uint16_t max_fn;
+    uint8_t flags;
+    uint8_t entry_size;
+    ClpFhListEntry fh_list[CLP_FH_LIST_NR_ENTRIES];
+} QEMU_PACKED ClpRspListPci;
+
+/* Query PCI function request */
+typedef struct ClpReqQueryPci {
+    ClpReqHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint32_t fh; /* function handle */
+    uint32_t reserved2;
+    uint64_t reserved3;
+} QEMU_PACKED ClpReqQueryPci;
+
+/* Query PCI function response */
+typedef struct ClpRspQueryPci {
+    ClpRspHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint16_t vfn; /* virtual fn number */
+#define CLP_RSP_QPCI_MASK_UTIL  0x100
+#define CLP_RSP_QPCI_MASK_PFGID 0xff
+    uint16_t ug;
+    uint32_t fid; /* pci function id */
+    uint8_t bar_size[PCI_BAR_COUNT];
+    uint16_t pchid;
+    uint32_t bar[PCI_BAR_COUNT];
+    uint64_t reserved2;
+    uint64_t sdma; /* start dma as */
+    uint64_t edma; /* end dma as */
+    uint32_t reserved3[11];
+    uint32_t uid;
+    uint8_t util_str[CLP_UTIL_STR_LEN]; /* utility string */
+} QEMU_PACKED ClpRspQueryPci;
+
+/* Query PCI function group request */
+typedef struct ClpReqQueryPciGrp {
+    ClpReqHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+#define CLP_REQ_QPCIG_MASK_PFGID 0xff
+    uint32_t g;
+    uint32_t reserved2;
+    uint64_t reserved3;
+} QEMU_PACKED ClpReqQueryPciGrp;
+
+/* Query PCI function group response */
+typedef struct ClpRspQueryPciGrp {
+    ClpRspHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+#define CLP_RSP_QPCIG_MASK_NOI 0xfff
+    uint16_t i;
+    uint8_t version;
+#define CLP_RSP_QPCIG_MASK_FRAME   0x2
+#define CLP_RSP_QPCIG_MASK_REFRESH 0x1
+    uint8_t fr;
+    uint16_t maxstbl;
+    uint16_t mui;
+    uint64_t reserved3;
+    uint64_t dasm; /* dma address space mask */
+    uint64_t msia; /* MSI address */
+    uint64_t reserved4;
+    uint64_t reserved5;
+} QEMU_PACKED ClpRspQueryPciGrp;
+
+/* Set PCI function request */
+typedef struct ClpReqSetPci {
+    ClpReqHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint32_t fh; /* function handle */
+    uint16_t reserved2;
+    uint8_t oc; /* operation controls */
+    uint8_t ndas; /* number of dma spaces */
+    uint64_t reserved3;
+} QEMU_PACKED ClpReqSetPci;
+
+/* Set PCI function response */
+typedef struct ClpRspSetPci {
+    ClpRspHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint32_t fh; /* function handle */
+    uint32_t reserved3;
+    uint64_t reserved4;
+} QEMU_PACKED ClpRspSetPci;
+
+typedef struct ClpReqRspListPci {
+    ClpReqListPci request;
+    ClpRspListPci response;
+} QEMU_PACKED ClpReqRspListPci;
+
+typedef struct ClpReqRspSetPci {
+    ClpReqSetPci request;
+    ClpRspSetPci response;
+} QEMU_PACKED ClpReqRspSetPci;
+
+typedef struct ClpReqRspQueryPci {
+    ClpReqQueryPci request;
+    ClpRspQueryPci response;
+} QEMU_PACKED ClpReqRspQueryPci;
+
+typedef struct ClpReqRspQueryPciGrp {
+    ClpReqQueryPciGrp request;
+    ClpRspQueryPciGrp response;
+} QEMU_PACKED ClpReqRspQueryPciGrp;
+
+#endif
diff --git a/hw/s390x/s390-pci-inst.h b/hw/s390x/s390-pci-inst.h
index fa3bf8b..6c4273a 100644
--- a/hw/s390x/s390-pci-inst.h
+++ b/hw/s390x/s390-pci-inst.h
@@ -17,202 +17,6 @@
 #include "s390-pci-bus.h"
 #include "sysemu/dma.h"
 
-/* CLP common request & response block size */
-#define CLP_BLK_SIZE 4096
-#define PCI_BAR_COUNT 6
-#define PCI_MAX_FUNCTIONS 4096
-
-typedef struct ClpReqHdr {
-    uint16_t len;
-    uint16_t cmd;
-} QEMU_PACKED ClpReqHdr;
-
-typedef struct ClpRspHdr {
-    uint16_t len;
-    uint16_t rsp;
-} QEMU_PACKED ClpRspHdr;
-
-/* CLP Response Codes */
-#define CLP_RC_OK         0x0010  /* Command request successfully */
-#define CLP_RC_CMD        0x0020  /* Command code not recognized */
-#define CLP_RC_PERM       0x0030  /* Command not authorized */
-#define CLP_RC_FMT        0x0040  /* Invalid command request format */
-#define CLP_RC_LEN        0x0050  /* Invalid command request length */
-#define CLP_RC_8K         0x0060  /* Command requires 8K LPCB */
-#define CLP_RC_RESNOT0    0x0070  /* Reserved field not zero */
-#define CLP_RC_NODATA     0x0080  /* No data available */
-#define CLP_RC_FC_UNKNOWN 0x0100  /* Function code not recognized */
-
-/*
- * Call Logical Processor - Command Codes
- */
-#define CLP_LIST_PCI            0x0002
-#define CLP_QUERY_PCI_FN        0x0003
-#define CLP_QUERY_PCI_FNGRP     0x0004
-#define CLP_SET_PCI_FN          0x0005
-
-/* PCI function handle list entry */
-typedef struct ClpFhListEntry {
-    uint16_t device_id;
-    uint16_t vendor_id;
-#define CLP_FHLIST_MASK_CONFIG 0x80000000
-    uint32_t config;
-    uint32_t fid;
-    uint32_t fh;
-} QEMU_PACKED ClpFhListEntry;
-
-#define CLP_RC_SETPCIFN_FH      0x0101 /* Invalid PCI fn handle */
-#define CLP_RC_SETPCIFN_FHOP    0x0102 /* Fn handle not valid for op */
-#define CLP_RC_SETPCIFN_DMAAS   0x0103 /* Invalid DMA addr space */
-#define CLP_RC_SETPCIFN_RES     0x0104 /* Insufficient resources */
-#define CLP_RC_SETPCIFN_ALRDY   0x0105 /* Fn already in requested state */
-#define CLP_RC_SETPCIFN_ERR     0x0106 /* Fn in permanent error state */
-#define CLP_RC_SETPCIFN_RECPND  0x0107 /* Error recovery pending */
-#define CLP_RC_SETPCIFN_BUSY    0x0108 /* Fn busy */
-#define CLP_RC_LISTPCI_BADRT    0x010a /* Resume token not recognized */
-#define CLP_RC_QUERYPCIFG_PFGID 0x010b /* Unrecognized PFGID */
-
-/* request or response block header length */
-#define LIST_PCI_HDR_LEN 32
-
-/* Number of function handles fitting in response block */
-#define CLP_FH_LIST_NR_ENTRIES \
-    ((CLP_BLK_SIZE - 2 * LIST_PCI_HDR_LEN) \
-        / sizeof(ClpFhListEntry))
-
-#define CLP_SET_ENABLE_PCI_FN  0 /* Yes, 0 enables it */
-#define CLP_SET_DISABLE_PCI_FN 1 /* Yes, 1 disables it */
-
-#define CLP_UTIL_STR_LEN 64
-
-#define CLP_MASK_FMT 0xf0000000
-
-/* List PCI functions request */
-typedef struct ClpReqListPci {
-    ClpReqHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint64_t resume_token;
-    uint64_t reserved2;
-} QEMU_PACKED ClpReqListPci;
-
-/* List PCI functions response */
-typedef struct ClpRspListPci {
-    ClpRspHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint64_t resume_token;
-    uint32_t mdd;
-    uint16_t max_fn;
-    uint8_t flags;
-    uint8_t entry_size;
-    ClpFhListEntry fh_list[CLP_FH_LIST_NR_ENTRIES];
-} QEMU_PACKED ClpRspListPci;
-
-/* Query PCI function request */
-typedef struct ClpReqQueryPci {
-    ClpReqHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint32_t fh; /* function handle */
-    uint32_t reserved2;
-    uint64_t reserved3;
-} QEMU_PACKED ClpReqQueryPci;
-
-/* Query PCI function response */
-typedef struct ClpRspQueryPci {
-    ClpRspHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint16_t vfn; /* virtual fn number */
-#define CLP_RSP_QPCI_MASK_UTIL  0x100
-#define CLP_RSP_QPCI_MASK_PFGID 0xff
-    uint16_t ug;
-    uint32_t fid; /* pci function id */
-    uint8_t bar_size[PCI_BAR_COUNT];
-    uint16_t pchid;
-    uint32_t bar[PCI_BAR_COUNT];
-    uint64_t reserved2;
-    uint64_t sdma; /* start dma as */
-    uint64_t edma; /* end dma as */
-    uint32_t reserved3[11];
-    uint32_t uid;
-    uint8_t util_str[CLP_UTIL_STR_LEN]; /* utility string */
-} QEMU_PACKED ClpRspQueryPci;
-
-/* Query PCI function group request */
-typedef struct ClpReqQueryPciGrp {
-    ClpReqHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-#define CLP_REQ_QPCIG_MASK_PFGID 0xff
-    uint32_t g;
-    uint32_t reserved2;
-    uint64_t reserved3;
-} QEMU_PACKED ClpReqQueryPciGrp;
-
-/* Query PCI function group response */
-typedef struct ClpRspQueryPciGrp {
-    ClpRspHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-#define CLP_RSP_QPCIG_MASK_NOI 0xfff
-    uint16_t i;
-    uint8_t version;
-#define CLP_RSP_QPCIG_MASK_FRAME   0x2
-#define CLP_RSP_QPCIG_MASK_REFRESH 0x1
-    uint8_t fr;
-    uint16_t maxstbl;
-    uint16_t mui;
-    uint64_t reserved3;
-    uint64_t dasm; /* dma address space mask */
-    uint64_t msia; /* MSI address */
-    uint64_t reserved4;
-    uint64_t reserved5;
-} QEMU_PACKED ClpRspQueryPciGrp;
-
-/* Set PCI function request */
-typedef struct ClpReqSetPci {
-    ClpReqHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint32_t fh; /* function handle */
-    uint16_t reserved2;
-    uint8_t oc; /* operation controls */
-    uint8_t ndas; /* number of dma spaces */
-    uint64_t reserved3;
-} QEMU_PACKED ClpReqSetPci;
-
-/* Set PCI function response */
-typedef struct ClpRspSetPci {
-    ClpRspHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint32_t fh; /* function handle */
-    uint32_t reserved3;
-    uint64_t reserved4;
-} QEMU_PACKED ClpRspSetPci;
-
-typedef struct ClpReqRspListPci {
-    ClpReqListPci request;
-    ClpRspListPci response;
-} QEMU_PACKED ClpReqRspListPci;
-
-typedef struct ClpReqRspSetPci {
-    ClpReqSetPci request;
-    ClpRspSetPci response;
-} QEMU_PACKED ClpReqRspSetPci;
-
-typedef struct ClpReqRspQueryPci {
-    ClpReqQueryPci request;
-    ClpRspQueryPci response;
-} QEMU_PACKED ClpReqRspQueryPci;
-
-typedef struct ClpReqRspQueryPciGrp {
-    ClpReqQueryPciGrp request;
-    ClpRspQueryPciGrp response;
-} QEMU_PACKED ClpReqRspQueryPciGrp;
-
 /* Load/Store status codes */
 #define ZPCI_PCI_ST_FUNC_NOT_ENABLED        4
 #define ZPCI_PCI_ST_FUNC_IN_ERR             8
-- 
1.8.3.1




* [Qemu-devel] [PATCH v3 3/5] s390: vfio_pci: Use a PCI Group structure
  2019-09-07  0:16 [Qemu-devel] [PATCH v3 0/5] Retrieving zPCI specific info from QEMU Matthew Rosato
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 1/5] vfio: vfio_iommu_type1: linux header place holder Matthew Rosato
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 2/5] s390: PCI: Create a header dedicated to PCI CLP Matthew Rosato
@ 2019-09-07  0:16 ` Matthew Rosato
  2019-09-09  5:18   ` [Qemu-devel] [qemu-s390x] " Thomas Huth
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 4/5] s390: vfio_pci: Use a PCI Function structure Matthew Rosato
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 5/5] s390: vfio_pci: Get zPCI function info from host Matthew Rosato
  4 siblings, 1 reply; 9+ messages in thread
From: Matthew Rosato @ 2019-09-07  0:16 UTC (permalink / raw)
  To: cohuck
  Cc: walling, alex.williamson, pmorel, david, mst, qemu-devel, pasic,
	borntraeger, qemu-s390x, pbonzini, rth

From: Pierre Morel <pmorel@linux.ibm.com>

We use an S390PCIGroup structure to hold the information
related to a zPCI function group.

This prepares us to support multiple groups and to retrieve
the group information from the host.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
---
 hw/s390x/s390-pci-bus.c  | 42 ++++++++++++++++++++++++++++++++++++++++++
 hw/s390x/s390-pci-bus.h  | 11 ++++++++++-
 hw/s390x/s390-pci-inst.c | 22 +++++++++++++---------
 3 files changed, 65 insertions(+), 10 deletions(-)

diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 963a41c..e625217 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -730,6 +730,46 @@ static void s390_pci_iommu_free(S390pciState *s, PCIBus *bus, int32_t devfn)
     object_unref(OBJECT(iommu));
 }
 
+static S390PCIGroup *s390_grp_create(int ug)
+{
+    S390PCIGroup *grp;
+    S390pciState *s = s390_get_phb();
+
+    grp = g_new0(S390PCIGroup, 1);
+    grp->ug = ug;
+    QTAILQ_INSERT_TAIL(&s->zpci_grps, grp, link);
+    return grp;
+}
+
+S390PCIGroup *s390_grp_find(int ug)
+{
+    S390PCIGroup *grp;
+    S390pciState *s = s390_get_phb();
+
+    QTAILQ_FOREACH(grp, &s->zpci_grps, link) {
+        if ((grp->ug & CLP_REQ_QPCIG_MASK_PFGID) == ug) {
+            return grp;
+        }
+    }
+    return NULL;
+}
+
+static void s390_pci_init_default_group(void)
+{
+    S390PCIGroup *grp;
+    ClpRspQueryPciGrp *resgrp;
+
+    grp = s390_grp_create(ZPCI_DEFAULT_FN_GRP);
+    resgrp = &grp->zpci_grp;
+    resgrp->fr = 1;
+    stq_p(&resgrp->dasm, 0);
+    stq_p(&resgrp->msia, ZPCI_MSI_ADDR);
+    stw_p(&resgrp->mui, DEFAULT_MUI);
+    stw_p(&resgrp->i, 128);
+    stw_p(&resgrp->maxstbl, 128);
+    resgrp->version = 0;
+}
+
 static void s390_pcihost_realize(DeviceState *dev, Error **errp)
 {
     PCIBus *b;
@@ -766,7 +806,9 @@ static void s390_pcihost_realize(DeviceState *dev, Error **errp)
     s->bus_no = 0;
     QTAILQ_INIT(&s->pending_sei);
     QTAILQ_INIT(&s->zpci_devs);
+    QTAILQ_INIT(&s->zpci_grps);
 
+    s390_pci_init_default_group();
     css_register_io_adapters(CSS_IO_ADAPTER_PCI, true, false,
                              S390_ADAPTER_SUPPRESSIBLE, &local_err);
     error_propagate(errp, local_err);
diff --git a/hw/s390x/s390-pci-bus.h b/hw/s390x/s390-pci-bus.h
index a5d2049..e95a797 100644
--- a/hw/s390x/s390-pci-bus.h
+++ b/hw/s390x/s390-pci-bus.h
@@ -312,6 +312,14 @@ typedef struct ZpciFmb {
 } ZpciFmb;
 QEMU_BUILD_BUG_MSG(offsetof(ZpciFmb, fmt0) != 48, "padding in ZpciFmb");
 
+#define ZPCI_DEFAULT_FN_GRP 0x20
+typedef struct S390PCIGroup {
+    ClpRspQueryPciGrp zpci_grp;
+    int ug;
+    QTAILQ_ENTRY(S390PCIGroup) link;
+} S390PCIGroup;
+S390PCIGroup *s390_grp_find(int ug);
+
 struct S390PCIBusDevice {
     DeviceState qdev;
     PCIDevice *pdev;
@@ -327,8 +335,8 @@ struct S390PCIBusDevice {
     QEMUTimer *fmb_timer;
     uint8_t isc;
     uint16_t noi;
-    uint16_t maxstbl;
     uint8_t sum;
+    S390PCIGroup *pci_grp;
     S390MsixInfo msix;
     AdapterRoutes routes;
     S390PCIIOMMU *iommu;
@@ -353,6 +361,7 @@ typedef struct S390pciState {
     GHashTable *zpci_table;
     QTAILQ_HEAD(, SeiContainer) pending_sei;
     QTAILQ_HEAD(, S390PCIBusDevice) zpci_devs;
+    QTAILQ_HEAD(, S390PCIGroup) zpci_grps;
 } S390pciState;
 
 S390pciState *s390_get_phb(void);
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index 4b3bd4a..00dd176 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -284,21 +284,25 @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra)
         stq_p(&resquery->edma, ZPCI_EDMA_ADDR);
         stl_p(&resquery->fid, pbdev->fid);
         stw_p(&resquery->pchid, 0);
-        stw_p(&resquery->ug, 1);
+        stw_p(&resquery->ug, ZPCI_DEFAULT_FN_GRP);
         stl_p(&resquery->uid, pbdev->uid);
         stw_p(&resquery->hdr.rsp, CLP_RC_OK);
         break;
     }
     case CLP_QUERY_PCI_FNGRP: {
         ClpRspQueryPciGrp *resgrp = (ClpRspQueryPciGrp *)resh;
-        resgrp->fr = 1;
-        stq_p(&resgrp->dasm, 0);
-        stq_p(&resgrp->msia, ZPCI_MSI_ADDR);
-        stw_p(&resgrp->mui, DEFAULT_MUI);
-        stw_p(&resgrp->i, 128);
-        stw_p(&resgrp->maxstbl, 128);
-        resgrp->version = 0;
 
+        ClpReqQueryPciGrp *reqgrp = (ClpReqQueryPciGrp *)reqh;
+        S390PCIGroup *grp;
+
+        grp = s390_grp_find(reqgrp->g);
+        if (!grp) {
+            /* We do not allow access to unknown groups */
+            /* The group must have been obtained with a vfio device */
+            stw_p(&resgrp->hdr.rsp, CLP_RC_QUERYPCIFG_PFGID);
+            goto out;
+        }
+        memcpy(resgrp, &grp->zpci_grp, sizeof(ClpRspQueryPciGrp));
         stw_p(&resgrp->hdr.rsp, CLP_RC_OK);
         break;
     }
@@ -754,7 +758,7 @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
     }
     /* Length must be greater than 8, a multiple of 8 */
     /* and not greater than maxstbl */
-    if ((len <= 8) || (len % 8) || (len > pbdev->maxstbl)) {
+    if ((len <= 8) || (len % 8) || (len > pbdev->pci_grp->zpci_grp.maxstbl)) {
         goto specification_error;
     }
     /* Do not cross a 4K-byte boundary */
-- 
1.8.3.1




* [Qemu-devel] [PATCH v3 4/5] s390: vfio_pci: Use a PCI Function structure
  2019-09-07  0:16 [Qemu-devel] [PATCH v3 0/5] Retrieving zPCI specific info from QEMU Matthew Rosato
                   ` (2 preceding siblings ...)
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 3/5] s390: vfio_pci: Use a PCI Group structure Matthew Rosato
@ 2019-09-07  0:16 ` Matthew Rosato
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 5/5] s390: vfio_pci: Get zPCI function info from host Matthew Rosato
  4 siblings, 0 replies; 9+ messages in thread
From: Matthew Rosato @ 2019-09-07  0:16 UTC (permalink / raw)
  To: cohuck
  Cc: walling, alex.williamson, pmorel, david, mst, qemu-devel, pasic,
	borntraeger, qemu-s390x, pbonzini, rth

From: Pierre Morel <pmorel@linux.ibm.com>

We use a ClpRspQueryPci structure to hold the information
related to a zPCI function.

This prepares us to support different zPCI functions
and to retrieve the zPCI function information from the host.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
---
 hw/s390x/s390-pci-bus.c  | 22 +++++++++++++++++-----
 hw/s390x/s390-pci-bus.h  |  1 +
 hw/s390x/s390-pci-inst.c |  8 ++------
 3 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index e625217..0d404c3 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -770,6 +770,17 @@ static void s390_pci_init_default_group(void)
     resgrp->version = 0;
 }
 
+static void set_pbdev_info(S390PCIBusDevice *pbdev)
+{
+    pbdev->zpci_fn.sdma = ZPCI_SDMA_ADDR;
+    pbdev->zpci_fn.edma = ZPCI_EDMA_ADDR;
+    pbdev->zpci_fn.pchid = 0;
+    pbdev->zpci_fn.ug = ZPCI_DEFAULT_FN_GRP;
+    pbdev->zpci_fn.fid = pbdev->fid;
+    pbdev->zpci_fn.uid = pbdev->uid;
+    pbdev->pci_grp = s390_grp_find(ZPCI_DEFAULT_FN_GRP);
+}
+
 static void s390_pcihost_realize(DeviceState *dev, Error **errp)
 {
     PCIBus *b;
@@ -988,17 +999,18 @@ static void s390_pcihost_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
             }
         }
 
+        pbdev->pdev = pdev;
+        pbdev->iommu = s390_pci_get_iommu(s, pci_get_bus(pdev), pdev->devfn);
+        pbdev->iommu->pbdev = pbdev;
+        pbdev->state = ZPCI_FS_DISABLED;
+        set_pbdev_info(pbdev);
+
         if (object_dynamic_cast(OBJECT(dev), "vfio-pci")) {
             pbdev->fh |= FH_SHM_VFIO;
         } else {
             pbdev->fh |= FH_SHM_EMUL;
         }
 
-        pbdev->pdev = pdev;
-        pbdev->iommu = s390_pci_get_iommu(s, pci_get_bus(pdev), pdev->devfn);
-        pbdev->iommu->pbdev = pbdev;
-        pbdev->state = ZPCI_FS_DISABLED;
-
         if (s390_pci_msix_init(pbdev)) {
             error_setg(errp, "MSI-X support is mandatory "
                        "in the S390 architecture");
diff --git a/hw/s390x/s390-pci-bus.h b/hw/s390x/s390-pci-bus.h
index e95a797..8c969d1 100644
--- a/hw/s390x/s390-pci-bus.h
+++ b/hw/s390x/s390-pci-bus.h
@@ -337,6 +337,7 @@ struct S390PCIBusDevice {
     uint16_t noi;
     uint8_t sum;
     S390PCIGroup *pci_grp;
+    ClpRspQueryPci zpci_fn;
     S390MsixInfo msix;
     AdapterRoutes routes;
     S390PCIIOMMU *iommu;
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index 00dd176..b02c360 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -267,6 +267,8 @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra)
             goto out;
         }
 
+        memcpy(resquery, &pbdev->zpci_fn, sizeof(*resquery));
+
         for (i = 0; i < PCI_BAR_COUNT; i++) {
             uint32_t data = pci_get_long(pbdev->pdev->config +
                 PCI_BASE_ADDRESS_0 + (i * 4));
@@ -280,12 +282,6 @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra)
                     resquery->bar_size[i]);
         }
 
-        stq_p(&resquery->sdma, ZPCI_SDMA_ADDR);
-        stq_p(&resquery->edma, ZPCI_EDMA_ADDR);
-        stl_p(&resquery->fid, pbdev->fid);
-        stw_p(&resquery->pchid, 0);
-        stw_p(&resquery->ug, ZPCI_DEFAULT_FN_GRP);
-        stl_p(&resquery->uid, pbdev->uid);
         stw_p(&resquery->hdr.rsp, CLP_RC_OK);
         break;
     }
-- 
1.8.3.1




* [Qemu-devel] [PATCH v3 5/5] s390: vfio_pci: Get zPCI function info from host
  2019-09-07  0:16 [Qemu-devel] [PATCH v3 0/5] Retrieving zPCI specific info from QEMU Matthew Rosato
                   ` (3 preceding siblings ...)
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 4/5] s390: vfio_pci: Use a PCI Function structure Matthew Rosato
@ 2019-09-07  0:16 ` Matthew Rosato
  4 siblings, 0 replies; 9+ messages in thread
From: Matthew Rosato @ 2019-09-07  0:16 UTC (permalink / raw)
  To: cohuck
  Cc: walling, alex.williamson, pmorel, david, mst, qemu-devel, pasic,
	borntraeger, qemu-s390x, pbonzini, rth

From: Pierre Morel <pmorel@linux.ibm.com>

We use the VFIO_REGION_SUBTYPE_ZDEV_CLP subregion of the
PCI_VENDOR_ID_IBM vendor region type to retrieve the CLP
information the kernel exports.

To stay compatible with older kernel versions, we fall back
to the previously predefined values (the same values used for
emulation) when the region is not found or when any problem
occurs while retrieving the information.

Once we have retrieved the host device information, we take care to
- use the virtual UID and FID
- disable all the IOMMU flags we do not support yet,
  keeping just the refresh bit.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
---
 hw/s390x/s390-pci-bus.c | 83 +++++++++++++++++++++++++++++++++++++++++++++++--
 hw/s390x/s390-pci-bus.h |  2 ++
 2 files changed, 83 insertions(+), 2 deletions(-)

diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 0d404c3..5069795 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -15,6 +15,8 @@
 #include "qapi/error.h"
 #include "qapi/visitor.h"
 #include "cpu.h"
+#include "s390-pci-clp.h"
+#include <linux/vfio_zdev.h>
 #include "s390-pci-bus.h"
 #include "s390-pci-inst.h"
 #include "hw/pci/pci_bus.h"
@@ -24,6 +26,9 @@
 #include "qemu/error-report.h"
 #include "qemu/module.h"
 
+#include "hw/vfio/pci.h"
+#include <sys/ioctl.h>
+
 #ifndef DEBUG_S390PCI_BUS
 #define DEBUG_S390PCI_BUS  0
 #endif
@@ -781,6 +786,76 @@ static void set_pbdev_info(S390PCIBusDevice *pbdev)
     pbdev->pci_grp = s390_grp_find(ZPCI_DEFAULT_FN_GRP);
 }
 
+static int get_pbdev_info(S390PCIBusDevice *pbdev)
+{
+    VFIOPCIDevice *vfio_pci;
+    VFIODevice *vdev;
+    struct vfio_region_info *info;
+    CLPRegion *clp_region;
+    int size;
+    int ret;
+
+    vfio_pci = container_of(pbdev->pdev, VFIOPCIDevice, pdev);
+    vdev = &vfio_pci->vbasedev;
+
+    if (vdev->num_regions < VFIO_PCI_NUM_REGIONS + 1) {
+        /* Fall back to old handling */
+        return -ENODEV;
+    }
+
+    ret = vfio_get_dev_region_info(vdev,
+                                   PCI_VENDOR_ID_IBM |
+                                   VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
+                                   VFIO_REGION_SUBTYPE_ZDEV_CLP, &info);
+    if (ret) {
+        /* Fall back to old handling */
+        return -EIO;
+    }
+
+    if (info->size != (sizeof(CLPRegion) + CLP_UTIL_STR_LEN)) {
+        /* Fall back to old handling */
+        g_free(info);
+        return -ENOMEM;
+    }
+    clp_region = g_malloc0(sizeof(*clp_region) + CLP_UTIL_STR_LEN);
+    size = pread(vdev->fd, clp_region, (sizeof(*clp_region) + CLP_UTIL_STR_LEN),
+                 info->offset);
+    if (size != (sizeof(*clp_region) + CLP_UTIL_STR_LEN)) {
+        goto end;
+    }
+
+    pbdev->zpci_fn.fid = pbdev->fid;
+    pbdev->zpci_fn.uid = pbdev->uid;
+    pbdev->zpci_fn.sdma = clp_region->start_dma;
+    pbdev->zpci_fn.edma = clp_region->end_dma;
+    pbdev->zpci_fn.pchid = clp_region->pchid;
+    pbdev->zpci_fn.ug = clp_region->gid;
+    pbdev->pci_grp = s390_grp_find(clp_region->gid);
+
+    if (!pbdev->pci_grp) {
+        ClpRspQueryPciGrp *resgrp;
+
+        pbdev->pci_grp = s390_grp_create(clp_region->gid);
+
+        resgrp = &pbdev->pci_grp->zpci_grp;
+        if (clp_region->flags & VFIO_PCI_ZDEV_FLAGS_REFRESH) {
+            resgrp->fr = 1;
+        }
+        stq_p(&resgrp->dasm, clp_region->dasm);
+        stq_p(&resgrp->msia, clp_region->msi_addr);
+        stw_p(&resgrp->mui, clp_region->mui);
+        stw_p(&resgrp->i, clp_region->noi);
+        /* These two must be queried in a next iteration */
+        stw_p(&resgrp->maxstbl, 128);
+        resgrp->version = 0;
+    }
+
+end:
+    g_free(info);
+    g_free(clp_region);
+    return ret;
+}
+
 static void s390_pcihost_realize(DeviceState *dev, Error **errp)
 {
     PCIBus *b;
@@ -853,7 +928,8 @@ static int s390_pci_msix_init(S390PCIBusDevice *pbdev)
     name = g_strdup_printf("msix-s390-%04x", pbdev->uid);
     memory_region_init_io(&pbdev->msix_notify_mr, OBJECT(pbdev),
                           &s390_msi_ctrl_ops, pbdev, name, PAGE_SIZE);
-    memory_region_add_subregion(&pbdev->iommu->mr, ZPCI_MSI_ADDR,
+    memory_region_add_subregion(&pbdev->iommu->mr,
+                                pbdev->pci_grp->zpci_grp.msia,
                                 &pbdev->msix_notify_mr);
     g_free(name);
 
@@ -1003,12 +1079,15 @@ static void s390_pcihost_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
         pbdev->iommu = s390_pci_get_iommu(s, pci_get_bus(pdev), pdev->devfn);
         pbdev->iommu->pbdev = pbdev;
         pbdev->state = ZPCI_FS_DISABLED;
-        set_pbdev_info(pbdev);
 
         if (object_dynamic_cast(OBJECT(dev), "vfio-pci")) {
             pbdev->fh |= FH_SHM_VFIO;
+            if (get_pbdev_info(pbdev) != 0) {
+                set_pbdev_info(pbdev);
+            }
         } else {
             pbdev->fh |= FH_SHM_EMUL;
+            set_pbdev_info(pbdev);
         }
 
         if (s390_pci_msix_init(pbdev)) {
diff --git a/hw/s390x/s390-pci-bus.h b/hw/s390x/s390-pci-bus.h
index 8c969d1..151e2d0 100644
--- a/hw/s390x/s390-pci-bus.h
+++ b/hw/s390x/s390-pci-bus.h
@@ -320,6 +320,8 @@ typedef struct S390PCIGroup {
 } S390PCIGroup;
 S390PCIGroup *s390_grp_find(int ug);
 
+typedef struct vfio_region_zpci_info CLPRegion;
+
 struct S390PCIBusDevice {
     DeviceState qdev;
     PCIDevice *pdev;
-- 
1.8.3.1




* Re: [Qemu-devel] [PATCH v3 1/5] vfio: vfio_iommu_type1: linux header place holder
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 1/5] vfio: vfio_iommu_type1: linux header place holder Matthew Rosato
@ 2019-09-08 13:08   ` Michael S. Tsirkin
  0 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2019-09-08 13:08 UTC (permalink / raw)
  To: Matthew Rosato
  Cc: walling, alex.williamson, pmorel, david, cohuck, qemu-devel,
	pasic, borntraeger, qemu-s390x, pbonzini, rth

On Fri, Sep 06, 2019 at 08:16:25PM -0400, Matthew Rosato wrote:
> From: Pierre Morel <pmorel@linux.ibm.com>
> 
> These definitions should be copied from the Linux kernel UAPI
> includes. The version used here is based on Linux 5.3.0.
> 
> We define a new device region in vfio.h so that the ZPCI CLP
> information can be retrieved by reading this region from
> userland.
> 
> We create a new file, vfio_zdev.h, to define the structure
> of the new region declared in vfio.h.
> 
> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>


You should add these in scripts/update-linux-headers.sh ,
then run that script.

> ---
>  linux-headers/linux/vfio.h      |  7 ++++---
>  linux-headers/linux/vfio_zdev.h | 35 +++++++++++++++++++++++++++++++++++
>  2 files changed, 39 insertions(+), 3 deletions(-)
>  create mode 100644 linux-headers/linux/vfio_zdev.h
> 
> diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
> index 24f5051..8328c87 100644
> --- a/linux-headers/linux/vfio.h
> +++ b/linux-headers/linux/vfio.h
> @@ -9,8 +9,8 @@
>   * it under the terms of the GNU General Public License version 2 as
>   * published by the Free Software Foundation.
>   */
> -#ifndef VFIO_H
> -#define VFIO_H
> +#ifndef _UAPIVFIO_H
> +#define _UAPIVFIO_H
>  
>  #include <linux/types.h>
>  #include <linux/ioctl.h>
> @@ -371,6 +371,7 @@ struct vfio_region_gfx_edid {
>   * to do TLB invalidation on a GPU.
>   */
>  #define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
> +#define VFIO_REGION_SUBTYPE_ZDEV_CLP		(2)
>  
>  /*
>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> @@ -914,4 +915,4 @@ struct vfio_iommu_spapr_tce_remove {
>  
>  /* ***************************************************************** */
>  
> -#endif /* VFIO_H */
> +#endif /* _UAPIVFIO_H */
> diff --git a/linux-headers/linux/vfio_zdev.h b/linux-headers/linux/vfio_zdev.h
> new file mode 100644
> index 0000000..2b912a5
> --- /dev/null
> +++ b/linux-headers/linux/vfio_zdev.h
> @@ -0,0 +1,35 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * Region definition for ZPCI devices
> + *
> + * Copyright IBM Corp. 2019
> + *
> + * Author(s): Pierre Morel <pmorel@linux.ibm.com>
> + */
> +
> +#ifndef _VFIO_ZDEV_H_
> +#define _VFIO_ZDEV_H_
> +
> +#include <linux/types.h>
> +
> +/**
> + * struct vfio_region_zpci_info - ZPCI information.
> + *
> + */
> +struct vfio_region_zpci_info {
> +	__u64 dasm;
> +	__u64 start_dma;
> +	__u64 end_dma;
> +	__u64 msi_addr;
> +	__u64 flags;
> +	__u16 pchid;
> +	__u16 mui;
> +	__u16 noi;
> +	__u16 maxstbl;
> +	__u8 version;
> +	__u8 gid;
> +#define VFIO_PCI_ZDEV_FLAGS_REFRESH 1
> +	__u8 util_str[];
> +} __attribute__ ((__packed__));
> +
> +#endif
> -- 
> 1.8.3.1



* Re: [Qemu-devel] [qemu-s390x] [PATCH v3 3/5] s390: vfio_pci: Use a PCI Group structure
  2019-09-07  0:16 ` [Qemu-devel] [PATCH v3 3/5] s390: vfio_pci: Use a PCI Group structure Matthew Rosato
@ 2019-09-09  5:18   ` Thomas Huth
  2019-09-09 16:21     ` Matthew Rosato
  0 siblings, 1 reply; 9+ messages in thread
From: Thomas Huth @ 2019-09-09  5:18 UTC (permalink / raw)
  To: Matthew Rosato, cohuck
  Cc: walling, pmorel, mst, qemu-s390x, david, qemu-devel, pasic,
	borntraeger, alex.williamson, pbonzini, rth

On 07/09/2019 02.16, Matthew Rosato wrote:
> From: Pierre Morel <pmorel@linux.ibm.com>
> 
> We use an S390PCIGroup structure to hold the information
> related to a zPCI function group.
> 
> This prepares us to support multiple groups and to retrieve
> the group information from the host.
> 
> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
> ---
>  hw/s390x/s390-pci-bus.c  | 42 ++++++++++++++++++++++++++++++++++++++++++
>  hw/s390x/s390-pci-bus.h  | 11 ++++++++++-
>  hw/s390x/s390-pci-inst.c | 22 +++++++++++++---------
>  3 files changed, 65 insertions(+), 10 deletions(-)
> 
> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
> index 963a41c..e625217 100644
> --- a/hw/s390x/s390-pci-bus.c
> +++ b/hw/s390x/s390-pci-bus.c
> @@ -730,6 +730,46 @@ static void s390_pci_iommu_free(S390pciState *s, PCIBus *bus, int32_t devfn)
>      object_unref(OBJECT(iommu));
>  }
>  
> +static S390PCIGroup *s390_grp_create(int ug)
> +{
> +    S390PCIGroup *grp;
> +    S390pciState *s = s390_get_phb();
> +
> +    grp = g_new0(S390PCIGroup, 1);
> +    grp->ug = ug;
> +    QTAILQ_INSERT_TAIL(&s->zpci_grps, grp, link);
> +    return grp;
> +}

Maybe an ignorant question, but shouldn't there also be some kind of
clean up function that also frees the memory again, e.g. during a
machine reset? Or are these groups supposed to survive a machine reset?

 Thomas



* Re: [Qemu-devel] [qemu-s390x] [PATCH v3 3/5] s390: vfio_pci: Use a PCI Group structure
  2019-09-09  5:18   ` [Qemu-devel] [qemu-s390x] " Thomas Huth
@ 2019-09-09 16:21     ` Matthew Rosato
  0 siblings, 0 replies; 9+ messages in thread
From: Matthew Rosato @ 2019-09-09 16:21 UTC (permalink / raw)
  To: Thomas Huth, cohuck
  Cc: walling, pmorel, mst, qemu-s390x, david, qemu-devel, pasic,
	borntraeger, alex.williamson, pbonzini, rth

On 9/9/19 1:18 AM, Thomas Huth wrote:
> On 07/09/2019 02.16, Matthew Rosato wrote:
>> From: Pierre Morel <pmorel@linux.ibm.com>
>>
>> We use an S390PCIGroup structure to hold the information
>> related to a zPCI function group.
>>
>> This prepares us to support multiple groups and to retrieve
>> the group information from the host.
>>
>> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
>> ---
>>  hw/s390x/s390-pci-bus.c  | 42 ++++++++++++++++++++++++++++++++++++++++++
>>  hw/s390x/s390-pci-bus.h  | 11 ++++++++++-
>>  hw/s390x/s390-pci-inst.c | 22 +++++++++++++---------
>>  3 files changed, 65 insertions(+), 10 deletions(-)
>>
>> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
>> index 963a41c..e625217 100644
>> --- a/hw/s390x/s390-pci-bus.c
>> +++ b/hw/s390x/s390-pci-bus.c
>> @@ -730,6 +730,46 @@ static void s390_pci_iommu_free(S390pciState *s, PCIBus *bus, int32_t devfn)
>>      object_unref(OBJECT(iommu));
>>  }
>>  
>> +static S390PCIGroup *s390_grp_create(int ug)
>> +{
>> +    S390PCIGroup *grp;
>> +    S390pciState *s = s390_get_phb();
>> +
>> +    grp = g_new0(S390PCIGroup, 1);
>> +    grp->ug = ug;
>> +    QTAILQ_INSERT_TAIL(&s->zpci_grps, grp, link);
>> +    return grp;
>> +}
> 
> Maybe an ignorant question, but shouldn't there also be some kind of
> clean up function that also frees the memory again, e.g. during a
> machine reset? Or are these groups supposed to survive a machine reset?

Hmm..  Well, I think it is in line with the way the devices themselves
are handled during reset (they are not removed during a reset unless
there was a pending unplug and their info persists).  But you have a
point in that it seems sketchy to leave the group information around,
particularly in cases where the last device associated with the group
has been unplugged.

So yes, I think there is some work to be done here.  I need to
investigate whether a precautionary wiping of the list (minus the
default group) at machine reset is really good enough or whether we need
to remove group info sooner (device unplug).
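
Roughly, something like this untested sketch, reusing the
S390PCIGroup/zpci_grps structures from patch 3 (helper name and the
exact hook, machine reset vs. device unplug, still to be decided):

static void s390_pci_grps_reset(S390pciState *s)
{
    S390PCIGroup *grp, *next;

    /* Drop everything except the default group created at realize time */
    QTAILQ_FOREACH_SAFE(grp, &s->zpci_grps, link, next) {
        if (grp->ug == ZPCI_DEFAULT_FN_GRP) {
            continue;
        }
        QTAILQ_REMOVE(&s->zpci_grps, grp, link);
        g_free(grp);
    }
}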

Thanks,
Matt


