* [Qemu-devel] [PATCH 0/5] Retrieving zPCI specific info from QEMU
@ 2019-05-10 14:38 Pierre Morel
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 1/5] vfio: vfio_iommu_type1: linux header place holder Pierre Morel
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Pierre Morel @ 2019-05-10 14:38 UTC (permalink / raw)
  To: cohuck
  Cc: pasic, mst, qemu-s390x, david, walling, qemu-devel, borntraeger,
	alex.williamson, pbonzini, rth

This series implements the QEMU part needed to retrieve zPCI specific
information from the host.
The Linux part has been posted as a separate patch series on the LKML:

Subject: [PATCH 0/4] Retrieving zPCI specific info with VFIO
Message-Id: <1557476555-20256-1-git-send-email-pmorel@linux.ibm.com>


We use the PCI VFIO interface to retrieve zPCI specific information
from the host.

The Linux patch enhances the VFIO_IOMMU_GET_INFO ioctl with
a new zPCI specific capability, which we use to access the
information stored in the zPCI function.
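
For reference, here is a condensed sketch of the retrieval flow as it is
implemented in patch 5 (error handling dropped; container_fd stands for
the VFIO container file descriptor of the device, and s390_fill_zpci()
is the helper added by patch 5 that copies the function and group
responses into the shadow structures):

    struct vfio_iommu_type1_info *info;
    uint32_t size;

    /* First call: ask for the capabilities and learn the required argsz */
    info = g_malloc0(sizeof(*info));
    info->argsz = sizeof(*info);
    info->flags = VFIO_IOMMU_INFO_CAPABILITIES;
    ioctl(container_fd, VFIO_IOMMU_GET_INFO, info);

    /* Second call: reissue with the full size to get the capability chain */
    size = info->argsz;
    info = g_realloc(info, size);
    info->flags = VFIO_IOMMU_INFO_CAPABILITIES;
    info->argsz = size;
    ioctl(container_fd, VFIO_IOMMU_GET_INFO, info);

    /* The zPCI function and group responses follow the fixed info struct */
    s390_fill_zpci(pbdev, (struct vfio_info_cap_header *)(info + 1));
    g_free(info);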

The retrieval is done only once per function and per function group,
when the device is plugged.
The guest's requests are then answered from shadow values that we keep
for as long as the device remains plugged.

There are still some values we need to virtualize, like
the UID and FID of the zPCI function, and we currently
only allow the refresh bit in the zPCI group flags.

Note that we export the CLP specific definitions in a dedicated
file for clarity.


Pierre Morel (5):
  vfio: vfio_iommu_type1: linux header place holder
  s390: PCI: Creation a header dedicated to PCI CLP
  s390: vfio_pci: Use a PCI Group structure
  s390: vfio_pci: Use a PCI Function structure
  s390: vfio_pci: Get zPCI function info from host

 hw/s390x/s390-pci-bus.c    | 142 ++++++++++++++++++++++++++++--
 hw/s390x/s390-pci-bus.h    |  13 ++-
 hw/s390x/s390-pci-clp.h    | 211 +++++++++++++++++++++++++++++++++++++++++++++
 hw/s390x/s390-pci-inst.c   |  28 +++---
 hw/s390x/s390-pci-inst.h   | 196 -----------------------------------------
 linux-headers/linux/vfio.h |  16 +++-
 6 files changed, 386 insertions(+), 220 deletions(-)
 create mode 100644 hw/s390x/s390-pci-clp.h

-- 
2.7.4




* [Qemu-devel] [PATCH 1/5] vfio: vfio_iommu_type1: linux header place holder
  2019-05-10 14:38 [Qemu-devel] [PATCH 0/5] Retrieving zPCI specific info from QEMU Pierre Morel
@ 2019-05-10 14:38 ` Pierre Morel
  2019-05-12 18:22   ` Michael S. Tsirkin
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 2/5] s390: PCI: Creation a header dedicated to PCI CLP Pierre Morel
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Pierre Morel @ 2019-05-10 14:38 UTC (permalink / raw)
  To: cohuck
  Cc: pasic, mst, qemu-s390x, david, walling, qemu-devel, borntraeger,
	alex.williamson, pbonzini, rth

This should be copied from Linux kernel UAPI includes.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
---
 linux-headers/linux/vfio.h | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
index 12a7b1d..eaecaef 100644
--- a/linux-headers/linux/vfio.h
+++ b/linux-headers/linux/vfio.h
@@ -9,8 +9,8 @@
  * it under the terms of the GNU General Public License version 2 as
  * published by the Free Software Foundation.
  */
-#ifndef VFIO_H
-#define VFIO_H
+#ifndef _UAPIVFIO_H
+#define _UAPIVFIO_H
 
 #include <linux/types.h>
 #include <linux/ioctl.h>
@@ -711,6 +711,16 @@ struct vfio_iommu_type1_info {
 	__u32	flags;
 #define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
 	__u64	iova_pgsizes;		/* Bitmap of supported page sizes */
+#define VFIO_IOMMU_INFO_CAPABILITIES (1 << 1)  /* support capabilities info */
+	__u64   cap_offset;     /* Offset within info struct of first cap */
+};
+
+#define VFIO_IOMMU_INFO_CAP_QFN		1
+#define VFIO_IOMMU_INFO_CAP_QGRP	2
+
+struct vfio_iommu_type1_info_block {
+	struct vfio_info_cap_header header;
+	__u32 data[];
 };
 
 #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
@@ -910,4 +920,4 @@ struct vfio_iommu_spapr_tce_remove {
 
 /* ***************************************************************** */
 
-#endif /* VFIO_H */
+#endif /* _UAPIVFIO_H */
-- 
2.7.4




* [Qemu-devel] [PATCH 2/5] s390: PCI: Creation a header dedicated to PCI CLP
  2019-05-10 14:38 [Qemu-devel] [PATCH 0/5] Retrieving zPCI specific info from QEMU Pierre Morel
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 1/5] vfio: vfio_iommu_type1: linux header place holder Pierre Morel
@ 2019-05-10 14:38 ` Pierre Morel
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 3/5] s390: vfio_pci: Use a PCI Group structure Pierre Morel
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Pierre Morel @ 2019-05-10 14:38 UTC (permalink / raw)
  To: cohuck
  Cc: pasic, mst, qemu-s390x, david, walling, qemu-devel, borntraeger,
	alex.williamson, pbonzini, rth

To have a clean separation between the s390-pci-bus.h
and s390-pci-inst.h headers, we move the PCI CLP definitions
into a dedicated header, s390-pci-clp.h.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
---
 hw/s390x/s390-pci-bus.h  |   1 +
 hw/s390x/s390-pci-clp.h  | 211 +++++++++++++++++++++++++++++++++++++++++++++++
 hw/s390x/s390-pci-inst.h | 196 -------------------------------------------
 3 files changed, 212 insertions(+), 196 deletions(-)
 create mode 100644 hw/s390x/s390-pci-clp.h

diff --git a/hw/s390x/s390-pci-bus.h b/hw/s390x/s390-pci-bus.h
index 550f3cc..a5d2049 100644
--- a/hw/s390x/s390-pci-bus.h
+++ b/hw/s390x/s390-pci-bus.h
@@ -19,6 +19,7 @@
 #include "hw/s390x/sclp.h"
 #include "hw/s390x/s390_flic.h"
 #include "hw/s390x/css.h"
+#include "s390-pci-clp.h"
 
 #define TYPE_S390_PCI_HOST_BRIDGE "s390-pcihost"
 #define TYPE_S390_PCI_BUS "s390-pcibus"
diff --git a/hw/s390x/s390-pci-clp.h b/hw/s390x/s390-pci-clp.h
new file mode 100644
index 0000000..e442307
--- /dev/null
+++ b/hw/s390x/s390-pci-clp.h
@@ -0,0 +1,211 @@
+/*
+ * s390 CLP instruction definitions
+ *
+ * Copyright 2019 IBM Corp.
+ * Author(s): Pierre Morel <pmorel@de.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#ifndef HW_S390_PCI_CLP
+#define HW_S390_PCI_CLP
+
+/* CLP common request & response block size */
+#define CLP_BLK_SIZE 4096
+#define PCI_BAR_COUNT 6
+#define PCI_MAX_FUNCTIONS 4096
+
+typedef struct ClpReqHdr {
+    uint16_t len;
+    uint16_t cmd;
+} QEMU_PACKED ClpReqHdr;
+
+typedef struct ClpRspHdr {
+    uint16_t len;
+    uint16_t rsp;
+} QEMU_PACKED ClpRspHdr;
+
+/* CLP Response Codes */
+#define CLP_RC_OK         0x0010  /* Command request successfully */
+#define CLP_RC_CMD        0x0020  /* Command code not recognized */
+#define CLP_RC_PERM       0x0030  /* Command not authorized */
+#define CLP_RC_FMT        0x0040  /* Invalid command request format */
+#define CLP_RC_LEN        0x0050  /* Invalid command request length */
+#define CLP_RC_8K         0x0060  /* Command requires 8K LPCB */
+#define CLP_RC_RESNOT0    0x0070  /* Reserved field not zero */
+#define CLP_RC_NODATA     0x0080  /* No data available */
+#define CLP_RC_FC_UNKNOWN 0x0100  /* Function code not recognized */
+
+/*
+ * Call Logical Processor - Command Codes
+ */
+#define CLP_LIST_PCI            0x0002
+#define CLP_QUERY_PCI_FN        0x0003
+#define CLP_QUERY_PCI_FNGRP     0x0004
+#define CLP_SET_PCI_FN          0x0005
+
+/* PCI function handle list entry */
+typedef struct ClpFhListEntry {
+    uint16_t device_id;
+    uint16_t vendor_id;
+#define CLP_FHLIST_MASK_CONFIG 0x80000000
+    uint32_t config;
+    uint32_t fid;
+    uint32_t fh;
+} QEMU_PACKED ClpFhListEntry;
+
+#define CLP_RC_SETPCIFN_FH      0x0101 /* Invalid PCI fn handle */
+#define CLP_RC_SETPCIFN_FHOP    0x0102 /* Fn handle not valid for op */
+#define CLP_RC_SETPCIFN_DMAAS   0x0103 /* Invalid DMA addr space */
+#define CLP_RC_SETPCIFN_RES     0x0104 /* Insufficient resources */
+#define CLP_RC_SETPCIFN_ALRDY   0x0105 /* Fn already in requested state */
+#define CLP_RC_SETPCIFN_ERR     0x0106 /* Fn in permanent error state */
+#define CLP_RC_SETPCIFN_RECPND  0x0107 /* Error recovery pending */
+#define CLP_RC_SETPCIFN_BUSY    0x0108 /* Fn busy */
+#define CLP_RC_LISTPCI_BADRT    0x010a /* Resume token not recognized */
+#define CLP_RC_QUERYPCIFG_PFGID 0x010b /* Unrecognized PFGID */
+
+/* request or response block header length */
+#define LIST_PCI_HDR_LEN 32
+
+/* Number of function handles fitting in response block */
+#define CLP_FH_LIST_NR_ENTRIES \
+    ((CLP_BLK_SIZE - 2 * LIST_PCI_HDR_LEN) \
+        / sizeof(ClpFhListEntry))
+
+#define CLP_SET_ENABLE_PCI_FN  0 /* Yes, 0 enables it */
+#define CLP_SET_DISABLE_PCI_FN 1 /* Yes, 1 disables it */
+
+#define CLP_UTIL_STR_LEN 64
+
+#define CLP_MASK_FMT 0xf0000000
+
+/* List PCI functions request */
+typedef struct ClpReqListPci {
+    ClpReqHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint64_t resume_token;
+    uint64_t reserved2;
+} QEMU_PACKED ClpReqListPci;
+
+/* List PCI functions response */
+typedef struct ClpRspListPci {
+    ClpRspHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint64_t resume_token;
+    uint32_t mdd;
+    uint16_t max_fn;
+    uint8_t flags;
+    uint8_t entry_size;
+    ClpFhListEntry fh_list[CLP_FH_LIST_NR_ENTRIES];
+} QEMU_PACKED ClpRspListPci;
+
+/* Query PCI function request */
+typedef struct ClpReqQueryPci {
+    ClpReqHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint32_t fh; /* function handle */
+    uint32_t reserved2;
+    uint64_t reserved3;
+} QEMU_PACKED ClpReqQueryPci;
+
+/* Query PCI function response */
+typedef struct ClpRspQueryPci {
+    ClpRspHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint16_t vfn; /* virtual fn number */
+#define CLP_RSP_QPCI_MASK_UTIL  0x100
+#define CLP_RSP_QPCI_MASK_PFGID 0xff
+    uint16_t ug;
+    uint32_t fid; /* pci function id */
+    uint8_t bar_size[PCI_BAR_COUNT];
+    uint16_t pchid;
+    uint32_t bar[PCI_BAR_COUNT];
+    uint64_t reserved2;
+    uint64_t sdma; /* start dma as */
+    uint64_t edma; /* end dma as */
+    uint32_t reserved3[11];
+    uint32_t uid;
+    uint8_t util_str[CLP_UTIL_STR_LEN]; /* utility string */
+} QEMU_PACKED ClpRspQueryPci;
+
+/* Query PCI function group request */
+typedef struct ClpReqQueryPciGrp {
+    ClpReqHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+#define CLP_REQ_QPCIG_MASK_PFGID 0xff
+    uint32_t g;
+    uint32_t reserved2;
+    uint64_t reserved3;
+} QEMU_PACKED ClpReqQueryPciGrp;
+
+/* Query PCI function group response */
+typedef struct ClpRspQueryPciGrp {
+    ClpRspHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+#define CLP_RSP_QPCIG_MASK_NOI 0xfff
+    uint16_t i;
+    uint8_t version;
+#define CLP_RSP_QPCIG_MASK_FRAME   0x2
+#define CLP_RSP_QPCIG_MASK_REFRESH 0x1
+    uint8_t fr;
+    uint16_t maxstbl;
+    uint16_t mui;
+    uint64_t reserved3;
+    uint64_t dasm; /* dma address space mask */
+    uint64_t msia; /* MSI address */
+    uint64_t reserved4;
+    uint64_t reserved5;
+} QEMU_PACKED ClpRspQueryPciGrp;
+
+/* Set PCI function request */
+typedef struct ClpReqSetPci {
+    ClpReqHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint32_t fh; /* function handle */
+    uint16_t reserved2;
+    uint8_t oc; /* operation controls */
+    uint8_t ndas; /* number of dma spaces */
+    uint64_t reserved3;
+} QEMU_PACKED ClpReqSetPci;
+
+/* Set PCI function response */
+typedef struct ClpRspSetPci {
+    ClpRspHdr hdr;
+    uint32_t fmt;
+    uint64_t reserved1;
+    uint32_t fh; /* function handle */
+    uint32_t reserved3;
+    uint64_t reserved4;
+} QEMU_PACKED ClpRspSetPci;
+
+typedef struct ClpReqRspListPci {
+    ClpReqListPci request;
+    ClpRspListPci response;
+} QEMU_PACKED ClpReqRspListPci;
+
+typedef struct ClpReqRspSetPci {
+    ClpReqSetPci request;
+    ClpRspSetPci response;
+} QEMU_PACKED ClpReqRspSetPci;
+
+typedef struct ClpReqRspQueryPci {
+    ClpReqQueryPci request;
+    ClpRspQueryPci response;
+} QEMU_PACKED ClpReqRspQueryPci;
+
+typedef struct ClpReqRspQueryPciGrp {
+    ClpReqQueryPciGrp request;
+    ClpRspQueryPciGrp response;
+} QEMU_PACKED ClpReqRspQueryPciGrp;
+
+#endif
diff --git a/hw/s390x/s390-pci-inst.h b/hw/s390x/s390-pci-inst.h
index fa3bf8b..6c4273a 100644
--- a/hw/s390x/s390-pci-inst.h
+++ b/hw/s390x/s390-pci-inst.h
@@ -17,202 +17,6 @@
 #include "s390-pci-bus.h"
 #include "sysemu/dma.h"
 
-/* CLP common request & response block size */
-#define CLP_BLK_SIZE 4096
-#define PCI_BAR_COUNT 6
-#define PCI_MAX_FUNCTIONS 4096
-
-typedef struct ClpReqHdr {
-    uint16_t len;
-    uint16_t cmd;
-} QEMU_PACKED ClpReqHdr;
-
-typedef struct ClpRspHdr {
-    uint16_t len;
-    uint16_t rsp;
-} QEMU_PACKED ClpRspHdr;
-
-/* CLP Response Codes */
-#define CLP_RC_OK         0x0010  /* Command request successfully */
-#define CLP_RC_CMD        0x0020  /* Command code not recognized */
-#define CLP_RC_PERM       0x0030  /* Command not authorized */
-#define CLP_RC_FMT        0x0040  /* Invalid command request format */
-#define CLP_RC_LEN        0x0050  /* Invalid command request length */
-#define CLP_RC_8K         0x0060  /* Command requires 8K LPCB */
-#define CLP_RC_RESNOT0    0x0070  /* Reserved field not zero */
-#define CLP_RC_NODATA     0x0080  /* No data available */
-#define CLP_RC_FC_UNKNOWN 0x0100  /* Function code not recognized */
-
-/*
- * Call Logical Processor - Command Codes
- */
-#define CLP_LIST_PCI            0x0002
-#define CLP_QUERY_PCI_FN        0x0003
-#define CLP_QUERY_PCI_FNGRP     0x0004
-#define CLP_SET_PCI_FN          0x0005
-
-/* PCI function handle list entry */
-typedef struct ClpFhListEntry {
-    uint16_t device_id;
-    uint16_t vendor_id;
-#define CLP_FHLIST_MASK_CONFIG 0x80000000
-    uint32_t config;
-    uint32_t fid;
-    uint32_t fh;
-} QEMU_PACKED ClpFhListEntry;
-
-#define CLP_RC_SETPCIFN_FH      0x0101 /* Invalid PCI fn handle */
-#define CLP_RC_SETPCIFN_FHOP    0x0102 /* Fn handle not valid for op */
-#define CLP_RC_SETPCIFN_DMAAS   0x0103 /* Invalid DMA addr space */
-#define CLP_RC_SETPCIFN_RES     0x0104 /* Insufficient resources */
-#define CLP_RC_SETPCIFN_ALRDY   0x0105 /* Fn already in requested state */
-#define CLP_RC_SETPCIFN_ERR     0x0106 /* Fn in permanent error state */
-#define CLP_RC_SETPCIFN_RECPND  0x0107 /* Error recovery pending */
-#define CLP_RC_SETPCIFN_BUSY    0x0108 /* Fn busy */
-#define CLP_RC_LISTPCI_BADRT    0x010a /* Resume token not recognized */
-#define CLP_RC_QUERYPCIFG_PFGID 0x010b /* Unrecognized PFGID */
-
-/* request or response block header length */
-#define LIST_PCI_HDR_LEN 32
-
-/* Number of function handles fitting in response block */
-#define CLP_FH_LIST_NR_ENTRIES \
-    ((CLP_BLK_SIZE - 2 * LIST_PCI_HDR_LEN) \
-        / sizeof(ClpFhListEntry))
-
-#define CLP_SET_ENABLE_PCI_FN  0 /* Yes, 0 enables it */
-#define CLP_SET_DISABLE_PCI_FN 1 /* Yes, 1 disables it */
-
-#define CLP_UTIL_STR_LEN 64
-
-#define CLP_MASK_FMT 0xf0000000
-
-/* List PCI functions request */
-typedef struct ClpReqListPci {
-    ClpReqHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint64_t resume_token;
-    uint64_t reserved2;
-} QEMU_PACKED ClpReqListPci;
-
-/* List PCI functions response */
-typedef struct ClpRspListPci {
-    ClpRspHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint64_t resume_token;
-    uint32_t mdd;
-    uint16_t max_fn;
-    uint8_t flags;
-    uint8_t entry_size;
-    ClpFhListEntry fh_list[CLP_FH_LIST_NR_ENTRIES];
-} QEMU_PACKED ClpRspListPci;
-
-/* Query PCI function request */
-typedef struct ClpReqQueryPci {
-    ClpReqHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint32_t fh; /* function handle */
-    uint32_t reserved2;
-    uint64_t reserved3;
-} QEMU_PACKED ClpReqQueryPci;
-
-/* Query PCI function response */
-typedef struct ClpRspQueryPci {
-    ClpRspHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint16_t vfn; /* virtual fn number */
-#define CLP_RSP_QPCI_MASK_UTIL  0x100
-#define CLP_RSP_QPCI_MASK_PFGID 0xff
-    uint16_t ug;
-    uint32_t fid; /* pci function id */
-    uint8_t bar_size[PCI_BAR_COUNT];
-    uint16_t pchid;
-    uint32_t bar[PCI_BAR_COUNT];
-    uint64_t reserved2;
-    uint64_t sdma; /* start dma as */
-    uint64_t edma; /* end dma as */
-    uint32_t reserved3[11];
-    uint32_t uid;
-    uint8_t util_str[CLP_UTIL_STR_LEN]; /* utility string */
-} QEMU_PACKED ClpRspQueryPci;
-
-/* Query PCI function group request */
-typedef struct ClpReqQueryPciGrp {
-    ClpReqHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-#define CLP_REQ_QPCIG_MASK_PFGID 0xff
-    uint32_t g;
-    uint32_t reserved2;
-    uint64_t reserved3;
-} QEMU_PACKED ClpReqQueryPciGrp;
-
-/* Query PCI function group response */
-typedef struct ClpRspQueryPciGrp {
-    ClpRspHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-#define CLP_RSP_QPCIG_MASK_NOI 0xfff
-    uint16_t i;
-    uint8_t version;
-#define CLP_RSP_QPCIG_MASK_FRAME   0x2
-#define CLP_RSP_QPCIG_MASK_REFRESH 0x1
-    uint8_t fr;
-    uint16_t maxstbl;
-    uint16_t mui;
-    uint64_t reserved3;
-    uint64_t dasm; /* dma address space mask */
-    uint64_t msia; /* MSI address */
-    uint64_t reserved4;
-    uint64_t reserved5;
-} QEMU_PACKED ClpRspQueryPciGrp;
-
-/* Set PCI function request */
-typedef struct ClpReqSetPci {
-    ClpReqHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint32_t fh; /* function handle */
-    uint16_t reserved2;
-    uint8_t oc; /* operation controls */
-    uint8_t ndas; /* number of dma spaces */
-    uint64_t reserved3;
-} QEMU_PACKED ClpReqSetPci;
-
-/* Set PCI function response */
-typedef struct ClpRspSetPci {
-    ClpRspHdr hdr;
-    uint32_t fmt;
-    uint64_t reserved1;
-    uint32_t fh; /* function handle */
-    uint32_t reserved3;
-    uint64_t reserved4;
-} QEMU_PACKED ClpRspSetPci;
-
-typedef struct ClpReqRspListPci {
-    ClpReqListPci request;
-    ClpRspListPci response;
-} QEMU_PACKED ClpReqRspListPci;
-
-typedef struct ClpReqRspSetPci {
-    ClpReqSetPci request;
-    ClpRspSetPci response;
-} QEMU_PACKED ClpReqRspSetPci;
-
-typedef struct ClpReqRspQueryPci {
-    ClpReqQueryPci request;
-    ClpRspQueryPci response;
-} QEMU_PACKED ClpReqRspQueryPci;
-
-typedef struct ClpReqRspQueryPciGrp {
-    ClpReqQueryPciGrp request;
-    ClpRspQueryPciGrp response;
-} QEMU_PACKED ClpReqRspQueryPciGrp;
-
 /* Load/Store status codes */
 #define ZPCI_PCI_ST_FUNC_NOT_ENABLED        4
 #define ZPCI_PCI_ST_FUNC_IN_ERR             8
-- 
2.7.4




* [Qemu-devel] [PATCH 3/5] s390: vfio_pci: Use a PCI Group structure
  2019-05-10 14:38 [Qemu-devel] [PATCH 0/5] Retrieving zPCI specific info from QEMU Pierre Morel
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 1/5] vfio: vfio_iommu_type1: linux header place holder Pierre Morel
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 2/5] s390: PCI: Creation a header dedicated to PCI CLP Pierre Morel
@ 2019-05-10 14:38 ` Pierre Morel
  2019-05-14 11:49   ` Cornelia Huck
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 4/5] s390: vfio_pci: Use a PCI Function structure Pierre Morel
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 5/5] s390: vfio_pci: Get zPCI function info from host Pierre Morel
  4 siblings, 1 reply; 10+ messages in thread
From: Pierre Morel @ 2019-05-10 14:38 UTC (permalink / raw)
  To: cohuck
  Cc: pasic, mst, qemu-s390x, david, walling, qemu-devel, borntraeger,
	alex.williamson, pbonzini, rth

We use an S390PCIGroup structure to hold the information
related to a zPCI function group.

This prepares us to support multiple groups and to retrieve
the group information from the host.
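
For instance, once the group id (PFGID) of a function is known, the
device can be attached to its group with a lookup-or-create pattern
along these lines (an illustrative sketch; pfgid stands for the group
id reported for the device, the helpers are the ones introduced by
this patch):

    S390PCIGroup *grp;

    grp = s390_grp_find(pfgid);        /* reuse the group if already known */
    if (!grp) {
        grp = s390_grp_create(pfgid);  /* otherwise register a new one */
    }
    pbdev->pci_grp = grp;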

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
---
 hw/s390x/s390-pci-bus.c  | 42 ++++++++++++++++++++++++++++++++++++++++++
 hw/s390x/s390-pci-bus.h  | 11 ++++++++++-
 hw/s390x/s390-pci-inst.c | 22 +++++++++++++---------
 3 files changed, 65 insertions(+), 10 deletions(-)

diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 2d0a28d..175ea8c 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -729,6 +729,46 @@ static void s390_pci_iommu_free(S390pciState *s, PCIBus *bus, int32_t devfn)
     object_unref(OBJECT(iommu));
 }
 
+static S390PCIGroup *s390_grp_create(int ug)
+{
+    S390PCIGroup *grp;
+    S390pciState *s = s390_get_phb();
+
+    grp = g_new0(S390PCIGroup, 1);
+    grp->ug = ug;
+    QTAILQ_INSERT_TAIL(&s->zpci_grps, grp, link);
+    return grp;
+}
+
+S390PCIGroup *s390_grp_find(int ug)
+{
+    S390PCIGroup *grp;
+    S390pciState *s = s390_get_phb();
+
+    QTAILQ_FOREACH(grp, &s->zpci_grps, link) {
+        if ((grp->ug & CLP_REQ_QPCIG_MASK_PFGID) == ug) {
+            return grp;
+        }
+    }
+    return NULL;
+}
+
+static void s390_pci_init_default_group(void)
+{
+    S390PCIGroup *grp;
+    ClpRspQueryPciGrp *resgrp;
+
+    grp = s390_grp_create(ZPCI_DEFAULT_FN_GRP);
+    resgrp = &grp->zpci_grp;
+    resgrp->fr = 1;
+    stq_p(&resgrp->dasm, 0);
+    stq_p(&resgrp->msia, ZPCI_MSI_ADDR);
+    stw_p(&resgrp->mui, DEFAULT_MUI);
+    stw_p(&resgrp->i, 128);
+    stw_p(&resgrp->maxstbl, 128);
+    resgrp->version = 0;
+}
+
 static void s390_pcihost_realize(DeviceState *dev, Error **errp)
 {
     PCIBus *b;
@@ -765,7 +805,9 @@ static void s390_pcihost_realize(DeviceState *dev, Error **errp)
     s->bus_no = 0;
     QTAILQ_INIT(&s->pending_sei);
     QTAILQ_INIT(&s->zpci_devs);
+    QTAILQ_INIT(&s->zpci_grps);
 
+    s390_pci_init_default_group();
     css_register_io_adapters(CSS_IO_ADAPTER_PCI, true, false,
                              S390_ADAPTER_SUPPRESSIBLE, &local_err);
     error_propagate(errp, local_err);
diff --git a/hw/s390x/s390-pci-bus.h b/hw/s390x/s390-pci-bus.h
index a5d2049..e95a797 100644
--- a/hw/s390x/s390-pci-bus.h
+++ b/hw/s390x/s390-pci-bus.h
@@ -312,6 +312,14 @@ typedef struct ZpciFmb {
 } ZpciFmb;
 QEMU_BUILD_BUG_MSG(offsetof(ZpciFmb, fmt0) != 48, "padding in ZpciFmb");
 
+#define ZPCI_DEFAULT_FN_GRP 0x20
+typedef struct S390PCIGroup {
+    ClpRspQueryPciGrp zpci_grp;
+    int ug;
+    QTAILQ_ENTRY(S390PCIGroup) link;
+} S390PCIGroup;
+S390PCIGroup *s390_grp_find(int ug);
+
 struct S390PCIBusDevice {
     DeviceState qdev;
     PCIDevice *pdev;
@@ -327,8 +335,8 @@ struct S390PCIBusDevice {
     QEMUTimer *fmb_timer;
     uint8_t isc;
     uint16_t noi;
-    uint16_t maxstbl;
     uint8_t sum;
+    S390PCIGroup *pci_grp;
     S390MsixInfo msix;
     AdapterRoutes routes;
     S390PCIIOMMU *iommu;
@@ -353,6 +361,7 @@ typedef struct S390pciState {
     GHashTable *zpci_table;
     QTAILQ_HEAD(, SeiContainer) pending_sei;
     QTAILQ_HEAD(, S390PCIBusDevice) zpci_devs;
+    QTAILQ_HEAD(, S390PCIGroup) zpci_grps;
 } S390pciState;
 
 S390pciState *s390_get_phb(void);
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index be28962..8147847 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -284,21 +284,25 @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra)
         stq_p(&resquery->edma, ZPCI_EDMA_ADDR);
         stl_p(&resquery->fid, pbdev->fid);
         stw_p(&resquery->pchid, 0);
-        stw_p(&resquery->ug, 1);
+        stw_p(&resquery->ug, ZPCI_DEFAULT_FN_GRP);
         stl_p(&resquery->uid, pbdev->uid);
         stw_p(&resquery->hdr.rsp, CLP_RC_OK);
         break;
     }
     case CLP_QUERY_PCI_FNGRP: {
         ClpRspQueryPciGrp *resgrp = (ClpRspQueryPciGrp *)resh;
-        resgrp->fr = 1;
-        stq_p(&resgrp->dasm, 0);
-        stq_p(&resgrp->msia, ZPCI_MSI_ADDR);
-        stw_p(&resgrp->mui, DEFAULT_MUI);
-        stw_p(&resgrp->i, 128);
-        stw_p(&resgrp->maxstbl, 128);
-        resgrp->version = 0;
 
+        ClpReqQueryPciGrp *reqgrp = (ClpReqQueryPciGrp *)reqh;
+        S390PCIGroup *grp;
+
+        grp = s390_grp_find(reqgrp->g);
+        if (!grp) {
+            /* We do not allow access to unknown groups */
+            /* The group must have been obtained with a vfio device */
+            stw_p(&resgrp->hdr.rsp, CLP_RC_QUERYPCIFG_PFGID);
+            goto out;
+        }
+        memcpy(resgrp, &grp->zpci_grp, sizeof(ClpRspQueryPciGrp));
         stw_p(&resgrp->hdr.rsp, CLP_RC_OK);
         break;
     }
@@ -752,7 +756,7 @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
     }
     /* Length must be greater than 8, a multiple of 8 */
     /* and not greater than maxstbl */
-    if ((len <= 8) || (len % 8) || (len > pbdev->maxstbl)) {
+    if ((len <= 8) || (len % 8) || (len > pbdev->pci_grp->zpci_grp.maxstbl)) {
         goto specification_error;
     }
     /* Do not cross a 4K-byte boundary */
-- 
2.7.4




* [Qemu-devel] [PATCH 4/5] s390: vfio_pci: Use a PCI Function structure
  2019-05-10 14:38 [Qemu-devel] [PATCH 0/5] Retrieving zPCI specific info from QEMU Pierre Morel
                   ` (2 preceding siblings ...)
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 3/5] s390: vfio_pci: Use a PCI Group structure Pierre Morel
@ 2019-05-10 14:38 ` Pierre Morel
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 5/5] s390: vfio_pci: Get zPCI function info from host Pierre Morel
  4 siblings, 0 replies; 10+ messages in thread
From: Pierre Morel @ 2019-05-10 14:38 UTC (permalink / raw)
  To: cohuck
  Cc: pasic, mst, qemu-s390x, david, walling, qemu-devel, borntraeger,
	alex.williamson, pbonzini, rth

We use a ClpRspQueryPci structure to hold the information
related to the zPCI function.

This prepares us to support different zPCI functions and to
retrieve the zPCI function information from the host.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
---
 hw/s390x/s390-pci-bus.c  | 22 +++++++++++++++++-----
 hw/s390x/s390-pci-bus.h  |  1 +
 hw/s390x/s390-pci-inst.c |  8 ++------
 3 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 175ea8c..6df80aa 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -769,6 +769,17 @@ static void s390_pci_init_default_group(void)
     resgrp->version = 0;
 }
 
+static void set_pbdev_info(S390PCIBusDevice *pbdev)
+{
+    pbdev->zpci_fn.sdma = ZPCI_SDMA_ADDR;
+    pbdev->zpci_fn.edma = ZPCI_EDMA_ADDR;
+    pbdev->zpci_fn.pchid = 0;
+    pbdev->zpci_fn.ug = ZPCI_DEFAULT_FN_GRP;
+    pbdev->zpci_fn.fid = pbdev->fid;
+    pbdev->zpci_fn.uid = pbdev->uid;
+    pbdev->pci_grp = s390_grp_find(ZPCI_DEFAULT_FN_GRP);
+}
+
 static void s390_pcihost_realize(DeviceState *dev, Error **errp)
 {
     PCIBus *b;
@@ -987,17 +998,18 @@ static void s390_pcihost_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
             }
         }
 
+        pbdev->pdev = pdev;
+        pbdev->iommu = s390_pci_get_iommu(s, pci_get_bus(pdev), pdev->devfn);
+        pbdev->iommu->pbdev = pbdev;
+        pbdev->state = ZPCI_FS_DISABLED;
+        set_pbdev_info(pbdev);
+
         if (object_dynamic_cast(OBJECT(dev), "vfio-pci")) {
             pbdev->fh |= FH_SHM_VFIO;
         } else {
             pbdev->fh |= FH_SHM_EMUL;
         }
 
-        pbdev->pdev = pdev;
-        pbdev->iommu = s390_pci_get_iommu(s, pci_get_bus(pdev), pdev->devfn);
-        pbdev->iommu->pbdev = pbdev;
-        pbdev->state = ZPCI_FS_DISABLED;
-
         if (s390_pci_msix_init(pbdev)) {
             error_setg(errp, "MSI-X support is mandatory "
                        "in the S390 architecture");
diff --git a/hw/s390x/s390-pci-bus.h b/hw/s390x/s390-pci-bus.h
index e95a797..8c969d1 100644
--- a/hw/s390x/s390-pci-bus.h
+++ b/hw/s390x/s390-pci-bus.h
@@ -337,6 +337,7 @@ struct S390PCIBusDevice {
     uint16_t noi;
     uint8_t sum;
     S390PCIGroup *pci_grp;
+    ClpRspQueryPci zpci_fn;
     S390MsixInfo msix;
     AdapterRoutes routes;
     S390PCIIOMMU *iommu;
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index 8147847..68ca240 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -267,6 +267,8 @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra)
             goto out;
         }
 
+        memcpy(resquery, &pbdev->zpci_fn, sizeof(*resquery));
+
         for (i = 0; i < PCI_BAR_COUNT; i++) {
             uint32_t data = pci_get_long(pbdev->pdev->config +
                 PCI_BASE_ADDRESS_0 + (i * 4));
@@ -280,12 +282,6 @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra)
                     resquery->bar_size[i]);
         }
 
-        stq_p(&resquery->sdma, ZPCI_SDMA_ADDR);
-        stq_p(&resquery->edma, ZPCI_EDMA_ADDR);
-        stl_p(&resquery->fid, pbdev->fid);
-        stw_p(&resquery->pchid, 0);
-        stw_p(&resquery->ug, ZPCI_DEFAULT_FN_GRP);
-        stl_p(&resquery->uid, pbdev->uid);
         stw_p(&resquery->hdr.rsp, CLP_RC_OK);
         break;
     }
-- 
2.7.4




* [Qemu-devel] [PATCH 5/5] s390: vfio_pci: Get zPCI function info from host
  2019-05-10 14:38 [Qemu-devel] [PATCH 0/5] Retrieving zPCI specific info from QEMU Pierre Morel
                   ` (3 preceding siblings ...)
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 4/5] s390: vfio_pci: Use a PCI Function structure Pierre Morel
@ 2019-05-10 14:38 ` Pierre Morel
  4 siblings, 0 replies; 10+ messages in thread
From: Pierre Morel @ 2019-05-10 14:38 UTC (permalink / raw)
  To: cohuck
  Cc: pasic, mst, qemu-s390x, david, walling, qemu-devel, borntraeger,
	alex.williamson, pbonzini, rth

The VFIO_IOMMU_INFO_CAPABILITIES flag allows us to retrieve IOMMU specific
information from the IOMMU associated with the zPCI VFIO device.

When we retrieve the host device information, we take care to:
- use the virtual UID and FID,
- disable all the IOMMU flags we do not support yet, keeping only
  the refresh bit.
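
For reference, the capability chain this code expects is laid out
roughly as follows (an illustrative sketch; the precise offsets are
given by the capability headers in the buffer returned by
VFIO_IOMMU_GET_INFO):

    struct vfio_iommu_type1_info    /* argsz, flags, iova_pgsizes, cap_offset */
    struct vfio_info_cap_header     /* id = VFIO_IOMMU_INFO_CAP_QFN   */
    ClpRspQueryPci                  /* zPCI function attributes       */
    struct vfio_info_cap_header     /* id = VFIO_IOMMU_INFO_CAP_QGRP  */
    ClpRspQueryPciGrp               /* zPCI function group attributes */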

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
---
 hw/s390x/s390-pci-bus.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 78 insertions(+), 2 deletions(-)

diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 6df80aa..3b7539c 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -23,6 +23,9 @@
 #include "hw/pci/msi.h"
 #include "qemu/error-report.h"
 
+#include "hw/vfio/pci.h"
+#include <sys/ioctl.h>
+
 #ifndef DEBUG_S390PCI_BUS
 #define DEBUG_S390PCI_BUS  0
 #endif
@@ -780,6 +783,75 @@ static void set_pbdev_info(S390PCIBusDevice *pbdev)
     pbdev->pci_grp = s390_grp_find(ZPCI_DEFAULT_FN_GRP);
 }
 
+static int s390_fill_zpci(S390PCIBusDevice *pbdev,
+                          struct vfio_info_cap_header *cap)
+{
+    ClpRspQueryPci *rsp_fn;
+    ClpRspQueryPciGrp *rsp_grp;
+    S390PCIGroup *pci_grp;
+
+    /* We expect the function response first */
+    if (cap->id != VFIO_IOMMU_INFO_CAP_QFN) {
+        return -ENODEV;
+    }
+    rsp_fn = (struct ClpRspQueryPci *)(cap + 1);
+    memcpy(&pbdev->zpci_fn, rsp_fn, sizeof(*rsp_fn));
+    /* We use the virtualized FID and UID */
+    pbdev->zpci_fn.fid = pbdev->fid;
+    pbdev->zpci_fn.uid = pbdev->uid;
+
+    cap = (struct vfio_info_cap_header *)((char *)cap + cap->next);
+    if (cap->id != VFIO_IOMMU_INFO_CAP_QGRP) {
+        return -ENODEV;
+    }
+    pci_grp = s390_grp_find(rsp_fn->ug);
+    if (!pci_grp) {
+        pci_grp = s390_grp_create(rsp_fn->ug);
+    }
+
+    rsp_grp = (struct ClpRspQueryPciGrp *)(cap + 1);
+
+    memcpy(&pci_grp->zpci_grp, rsp_grp, sizeof(*rsp_grp));
+    /* We only support the refresh bit */
+    pci_grp->zpci_grp.fr = 1;
+
+    pbdev->pci_grp = pci_grp;
+
+    return 0;
+}
+
+static int get_pbdev_info(S390PCIBusDevice *pbdev)
+{
+    VFIOPCIDevice *vfio_pci;
+    int fd;
+    int ret;
+    struct vfio_iommu_type1_info *info;
+    int size;
+
+    vfio_pci = container_of(pbdev->pdev, VFIOPCIDevice, pdev);
+    fd = vfio_pci->vbasedev.group->container->fd;
+    info = g_malloc0(sizeof(*info));
+    info->flags = VFIO_IOMMU_INFO_CAPABILITIES;
+    info->argsz = sizeof(*info);
+    ret = ioctl(fd, VFIO_IOMMU_GET_INFO, info);
+    if (ret) {
+        return ret;
+    }
+    size = info->argsz;
+    info = g_realloc(info, size);
+    info->flags = VFIO_IOMMU_INFO_CAPABILITIES;
+    info->argsz = size;
+    ret = ioctl(fd, VFIO_IOMMU_GET_INFO, info);
+    if (ret) {
+        return ret;
+    }
+    /* Fill zPCI parameters maxstbl, start dma, end dma...*/
+    /* using the caps */
+    ret = s390_fill_zpci(pbdev, (struct vfio_info_cap_header *)(info + 1));
+    g_free(info);
+    return ret;
+}
+
 static void s390_pcihost_realize(DeviceState *dev, Error **errp)
 {
     PCIBus *b;
@@ -852,7 +924,8 @@ static int s390_pci_msix_init(S390PCIBusDevice *pbdev)
     name = g_strdup_printf("msix-s390-%04x", pbdev->uid);
     memory_region_init_io(&pbdev->msix_notify_mr, OBJECT(pbdev),
                           &s390_msi_ctrl_ops, pbdev, name, PAGE_SIZE);
-    memory_region_add_subregion(&pbdev->iommu->mr, ZPCI_MSI_ADDR,
+    memory_region_add_subregion(&pbdev->iommu->mr,
+                                pbdev->pci_grp->zpci_grp.msia,
                                 &pbdev->msix_notify_mr);
     g_free(name);
 
@@ -1002,12 +1075,15 @@ static void s390_pcihost_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
         pbdev->iommu = s390_pci_get_iommu(s, pci_get_bus(pdev), pdev->devfn);
         pbdev->iommu->pbdev = pbdev;
         pbdev->state = ZPCI_FS_DISABLED;
-        set_pbdev_info(pbdev);
 
         if (object_dynamic_cast(OBJECT(dev), "vfio-pci")) {
             pbdev->fh |= FH_SHM_VFIO;
+            if (get_pbdev_info(pbdev) != 0) {
+                set_pbdev_info(pbdev);
+            }
         } else {
             pbdev->fh |= FH_SHM_EMUL;
+            set_pbdev_info(pbdev);
         }
 
         if (s390_pci_msix_init(pbdev)) {
-- 
2.7.4




* Re: [Qemu-devel] [PATCH 1/5] vfio: vfio_iommu_type1: linux header place holder
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 1/5] vfio: vfio_iommu_type1: linux header place holder Pierre Morel
@ 2019-05-12 18:22   ` Michael S. Tsirkin
  2019-05-16  8:51     ` Pierre Morel
  0 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2019-05-12 18:22 UTC (permalink / raw)
  To: Pierre Morel
  Cc: pasic, david, qemu-s390x, cohuck, walling, qemu-devel,
	borntraeger, alex.williamson, pbonzini, rth

On Fri, May 10, 2019 at 04:38:49PM +0200, Pierre Morel wrote:
> This should be copied from Linux kernel UAPI includes.
> 
> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>

pls add a note which linux version did you sync with.

> ---
>  linux-headers/linux/vfio.h | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
> index 12a7b1d..eaecaef 100644
> --- a/linux-headers/linux/vfio.h
> +++ b/linux-headers/linux/vfio.h
> @@ -9,8 +9,8 @@
>   * it under the terms of the GNU General Public License version 2 as
>   * published by the Free Software Foundation.
>   */
> -#ifndef VFIO_H
> -#define VFIO_H
> +#ifndef _UAPIVFIO_H
> +#define _UAPIVFIO_H
>  
>  #include <linux/types.h>
>  #include <linux/ioctl.h>
> @@ -711,6 +711,16 @@ struct vfio_iommu_type1_info {
>  	__u32	flags;
>  #define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
>  	__u64	iova_pgsizes;		/* Bitmap of supported page sizes */
> +#define VFIO_IOMMU_INFO_CAPABILITIES (1 << 1)  /* support capabilities info */
> +	__u64   cap_offset;     /* Offset within info struct of first cap */
> +};
> +
> +#define VFIO_IOMMU_INFO_CAP_QFN		1
> +#define VFIO_IOMMU_INFO_CAP_QGRP	2
> +
> +struct vfio_iommu_type1_info_block {
> +	struct vfio_info_cap_header header;
> +	__u32 data[];
>  };
>  
>  #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
> @@ -910,4 +920,4 @@ struct vfio_iommu_spapr_tce_remove {
>  
>  /* ***************************************************************** */
>  
> -#endif /* VFIO_H */
> +#endif /* _UAPIVFIO_H */
> -- 
> 2.7.4



* Re: [Qemu-devel] [PATCH 3/5] s390: vfio_pci: Use a PCI Group structure
  2019-05-10 14:38 ` [Qemu-devel] [PATCH 3/5] s390: vfio_pci: Use a PCI Group structure Pierre Morel
@ 2019-05-14 11:49   ` Cornelia Huck
  2019-05-16  8:55     ` Pierre Morel
  0 siblings, 1 reply; 10+ messages in thread
From: Cornelia Huck @ 2019-05-14 11:49 UTC (permalink / raw)
  To: Pierre Morel
  Cc: pasic, mst, qemu-s390x, david, walling, qemu-devel, borntraeger,
	alex.williamson, pbonzini, rth

On Fri, 10 May 2019 16:38:51 +0200
Pierre Morel <pmorel@linux.ibm.com> wrote:

> We use an S390PCIGroup structure to hold the information
> related to a zPCI function group.
> 
> This prepares us to support multiple groups and to retrieve
> the group information from the host.

What if there is no host to retrieve information from?

> 
> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
> ---
>  hw/s390x/s390-pci-bus.c  | 42 ++++++++++++++++++++++++++++++++++++++++++
>  hw/s390x/s390-pci-bus.h  | 11 ++++++++++-
>  hw/s390x/s390-pci-inst.c | 22 +++++++++++++---------
>  3 files changed, 65 insertions(+), 10 deletions(-)

> diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
> index be28962..8147847 100644
> --- a/hw/s390x/s390-pci-inst.c
> +++ b/hw/s390x/s390-pci-inst.c
> @@ -284,21 +284,25 @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra)
>          stq_p(&resquery->edma, ZPCI_EDMA_ADDR);
>          stl_p(&resquery->fid, pbdev->fid);
>          stw_p(&resquery->pchid, 0);
> -        stw_p(&resquery->ug, 1);
> +        stw_p(&resquery->ug, ZPCI_DEFAULT_FN_GRP);
>          stl_p(&resquery->uid, pbdev->uid);
>          stw_p(&resquery->hdr.rsp, CLP_RC_OK);
>          break;
>      }
>      case CLP_QUERY_PCI_FNGRP: {
>          ClpRspQueryPciGrp *resgrp = (ClpRspQueryPciGrp *)resh;
> -        resgrp->fr = 1;
> -        stq_p(&resgrp->dasm, 0);
> -        stq_p(&resgrp->msia, ZPCI_MSI_ADDR);
> -        stw_p(&resgrp->mui, DEFAULT_MUI);
> -        stw_p(&resgrp->i, 128);
> -        stw_p(&resgrp->maxstbl, 128);
> -        resgrp->version = 0;
>  
> +        ClpReqQueryPciGrp *reqgrp = (ClpReqQueryPciGrp *)reqh;
> +        S390PCIGroup *grp;
> +
> +        grp = s390_grp_find(reqgrp->g);
> +        if (!grp) {
> +            /* We do not allow access to unknown groups */
> +            /* The group must have been obtained with a vfio device */

What about non-vfio devices? How does this whole feature work for
emulated devices?

> +            stw_p(&resgrp->hdr.rsp, CLP_RC_QUERYPCIFG_PFGID);
> +            goto out;
> +        }
> +        memcpy(resgrp, &grp->zpci_grp, sizeof(ClpRspQueryPciGrp));
>          stw_p(&resgrp->hdr.rsp, CLP_RC_OK);
>          break;
>      }
> @@ -752,7 +756,7 @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
>      }
>      /* Length must be greater than 8, a multiple of 8 */
>      /* and not greater than maxstbl */
> -    if ((len <= 8) || (len % 8) || (len > pbdev->maxstbl)) {
> +    if ((len <= 8) || (len % 8) || (len > pbdev->pci_grp->zpci_grp.maxstbl)) {
>          goto specification_error;
>      }
>      /* Do not cross a 4K-byte boundary */




* Re: [Qemu-devel] [PATCH 1/5] vfio: vfio_iommu_type1: linux header place holder
  2019-05-12 18:22   ` Michael S. Tsirkin
@ 2019-05-16  8:51     ` Pierre Morel
  0 siblings, 0 replies; 10+ messages in thread
From: Pierre Morel @ 2019-05-16  8:51 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: pasic, david, qemu-s390x, cohuck, walling, qemu-devel,
	borntraeger, alex.williamson, pbonzini, rth

On 12/05/2019 20:22, Michael S. Tsirkin wrote:
> On Fri, May 10, 2019 at 04:38:49PM +0200, Pierre Morel wrote:
>> This should be copied from Linux kernel UAPI includes.
>>
>> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
> 
> pls add a note which linux version did you sync with.

I will, thanks.
Pierre

> 
>> ---
>>   linux-headers/linux/vfio.h | 16 +++++++++++++---
>>   1 file changed, 13 insertions(+), 3 deletions(-)
>>
>> diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
>> index 12a7b1d..eaecaef 100644
>> --- a/linux-headers/linux/vfio.h
>> +++ b/linux-headers/linux/vfio.h
>> @@ -9,8 +9,8 @@
>>    * it under the terms of the GNU General Public License version 2 as
>>    * published by the Free Software Foundation.
>>    */
>> -#ifndef VFIO_H
>> -#define VFIO_H
>> +#ifndef _UAPIVFIO_H
>> +#define _UAPIVFIO_H
>>   
>>   #include <linux/types.h>
>>   #include <linux/ioctl.h>
>> @@ -711,6 +711,16 @@ struct vfio_iommu_type1_info {
>>   	__u32	flags;
>>   #define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
>>   	__u64	iova_pgsizes;		/* Bitmap of supported page sizes */
>> +#define VFIO_IOMMU_INFO_CAPABILITIES (1 << 1)  /* support capabilities info */
>> +	__u64   cap_offset;     /* Offset within info struct of first cap */
>> +};
>> +
>> +#define VFIO_IOMMU_INFO_CAP_QFN		1
>> +#define VFIO_IOMMU_INFO_CAP_QGRP	2
>> +
>> +struct vfio_iommu_type1_info_block {
>> +	struct vfio_info_cap_header header;
>> +	__u32 data[];
>>   };
>>   
>>   #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
>> @@ -910,4 +920,4 @@ struct vfio_iommu_spapr_tce_remove {
>>   
>>   /* ***************************************************************** */
>>   
>> -#endif /* VFIO_H */
>> +#endif /* _UAPIVFIO_H */
>> -- 
>> 2.7.4
> 


-- 
Pierre Morel
Linux/KVM/QEMU in Böblingen - Germany




* Re: [Qemu-devel] [PATCH 3/5] s390: vfio_pci: Use a PCI Group structure
  2019-05-14 11:49   ` Cornelia Huck
@ 2019-05-16  8:55     ` Pierre Morel
  0 siblings, 0 replies; 10+ messages in thread
From: Pierre Morel @ 2019-05-16  8:55 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: pasic, mst, qemu-s390x, david, walling, qemu-devel, borntraeger,
	alex.williamson, pbonzini, rth

On 14/05/2019 13:49, Cornelia Huck wrote:
> On Fri, 10 May 2019 16:38:51 +0200
> Pierre Morel <pmorel@linux.ibm.com> wrote:
> 
>> We use an S390PCIGroup structure to hold the information
>> related to a zPCI function group.
>>
>> This prepares us to support multiple groups and to retrieve
>> the group information from the host.
> 
> What if there is no host to retrieve information from?

There is a default group for emulated devices.
I will enhance the comment.

Thanks

> 
>>
>> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
>> ---
>>   hw/s390x/s390-pci-bus.c  | 42 ++++++++++++++++++++++++++++++++++++++++++
>>   hw/s390x/s390-pci-bus.h  | 11 ++++++++++-
>>   hw/s390x/s390-pci-inst.c | 22 +++++++++++++---------
>>   3 files changed, 65 insertions(+), 10 deletions(-)
> 
>> diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
>> index be28962..8147847 100644
>> --- a/hw/s390x/s390-pci-inst.c
>> +++ b/hw/s390x/s390-pci-inst.c
>> @@ -284,21 +284,25 @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra)
>>           stq_p(&resquery->edma, ZPCI_EDMA_ADDR);
>>           stl_p(&resquery->fid, pbdev->fid);
>>           stw_p(&resquery->pchid, 0);
>> -        stw_p(&resquery->ug, 1);
>> +        stw_p(&resquery->ug, ZPCI_DEFAULT_FN_GRP);
>>           stl_p(&resquery->uid, pbdev->uid);
>>           stw_p(&resquery->hdr.rsp, CLP_RC_OK);
>>           break;
>>       }
>>       case CLP_QUERY_PCI_FNGRP: {
>>           ClpRspQueryPciGrp *resgrp = (ClpRspQueryPciGrp *)resh;
>> -        resgrp->fr = 1;
>> -        stq_p(&resgrp->dasm, 0);
>> -        stq_p(&resgrp->msia, ZPCI_MSI_ADDR);
>> -        stw_p(&resgrp->mui, DEFAULT_MUI);
>> -        stw_p(&resgrp->i, 128);
>> -        stw_p(&resgrp->maxstbl, 128);
>> -        resgrp->version = 0;
>>   
>> +        ClpReqQueryPciGrp *reqgrp = (ClpReqQueryPciGrp *)reqh;
>> +        S390PCIGroup *grp;
>> +
>> +        grp = s390_grp_find(reqgrp->g);
>> +        if (!grp) {
>> +            /* We do not allow access to unknown groups */
>> +            /* The group must have been obtained with a vfio device */
> 
> What about non-vfio devices? How does this whole feature work for
> emulated devices?

Emulated devices get a default group with predefined values,
the same values we used before this series.
I will modify the patch comment to explain the emulated devices case.
Thanks for the comments.

Regards,
Pierre


-- 
Pierre Morel
Linux/KVM/QEMU in Böblingen - Germany


