* [PATCH RFC v1 00/11] iommu/virtio: vSVA support with Arm
From: Vivek Gautam @ 2021-04-23  9:51 UTC
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

This patch series enables vSVA (virtualized Shared Virtual Addressing)
support in the virtio-iommu driver for devices assigned to the guest
kernel via PCI passthrough (also called device assignment). The changes
make the virtio-iommu driver capable of supporting Shared Virtual
Addressing.

The virtio-iommu device serves as the para-virtualized IOMMU for I/O
devices running in the guest. This work targets PCI devices that are
able to generate Address Translation Service (ATS) and Page Request
Interface (PRI) requests.

With vSVA support, devices running in the guest can use a process
address space (guest virtual addresses, GVA) for DMA. The diagram below
shows a simple system layout using IOMMUs in the guest and the host.

                 ------------------
                 |  virtio-iommu  |
                 |    driver      |
                 |  (front-end)   |
                 |________________|
                                               Guest kernel
          --------------------------------------------------

                 ------------------
                 |  virtio-iommu  |
                 |    driver      |
                 |  (back-end)    |
                 |________________|
                                                    kvmtool
          --------------------------------------------------

                 ------------------
                 |  arm-smmu-v3   |
                 |    driver      |
                 |________________|
                                                Host kernel
          --------------------------------------------------

                 ------------------
                 |  arm-smmu-v3   |
                 |    hardware    |
                 |________________|
                                                   Hardware
          --------------------------------------------------

The flow of the various messages/requests looks like this (a rough
sketch of the guest-side call path follows the list):
a) The stage-1 page tables are prepared by the virtio-iommu driver
   using CPU page table info. This page table data then flows from the
   guest kernel to the host kernel using the VFIO uapi changes [1]. The
   stage-1 page tables are then programmed into the hardware by the
   arm-smmu-v3 driver.
b) The device can then start initiating DMA transactions using ATS
   requests.
c) When a GVA is used, the SMMU encounters a translation fault and
   responds to the device with ATS success - translation failure.
d) The device can then send a PRI request that eventually populates
   the PRIQ of arm-smmu-v3.
e) The page fault info is captured from the PRIQ and sent to the guest
   kernel using the VFIO dma fault region added in [1].
f) The page fault is received on the virt-queue by the virtio-iommu
   driver and is then passed to the io-page-fault handler. The io page
   fault handler talks to the mm fault handling engine in the guest
   kernel and gets the CPU page tables updated.
g) The virtio-iommu driver then sends a page_response to the back-end
   virtio-iommu in the VMM. From there the page_response info is passed
   to the host kernel.
h) The arm-smmu-v3 driver running in the host then uses this page
   response info to send a response to the requesting device.
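
For reference, a rough sketch of the guest-side call path for steps
(f) and (g); viommu_page_response() is a hypothetical name standing in
for the new request added at the end of the series (the real code is
in patches 3, 4 and 11):

    viommu_fault_handler()          /* fault arrives on virt-queue  */
      queue_work()                  /* defer out of atomic context  */
    viommu_handle_ppr()
      iommu_report_device_fault()   /* hand off to IOPF framework   */
    iopf_handle_group()             /* IOPF workqueue               */
      handle_mm_fault()             /* update the CPU page tables   */
    viommu_page_response()          /* send the response to the VMM */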

The series uses the nested page table support [2] as the base to build
the vSVA solution.

The changes are inspired by the SVA support in arm-smmu-v3 and make use
of the io-page-fault changes added by Jean-Philippe [3].

The changes include:
- The mmu notifier structure, and the allocation and freeing of shared
  context descriptors, are moved to the arm-smmu-v3-cd-lib library
  driver, and the arm-smmu-v3-sva driver is changed to use these shared
  cd alloc/free helpers.
- In the virtio-iommu driver:
  - Various iommu_ops have been added to enable/disable the SVA
    feature, and to perform sva_bind/unbind.
  - An iopf queue has been added that accepts incoming page faults and
    works with the mm fault handler to get the page resident.
  - Incoming page fault information from the VMM is parsed using a
    work-queue and passed to the iopf fault handler.
  - A new virt-queue request has been added to send the page response
    back to the VMM back-end driver. The page response is populated
    from the mm fault handler response, and includes the Page Request
    pasid, group-id, response code, etc. (an endpoint-driver usage
    sketch follows this list).
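
From the endpoint-driver side, this path is exercised through the
generic IOMMU API. A minimal sketch, assuming the standard SVA flow
(not part of this series):

    struct iommu_sva *handle;
    int ret;

    /* Opt in to SVA for this device */
    ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
    if (ret)
            return ret;

    /* Bind the current process address space; DMA can then use GVAs
     * tagged with the PASID from iommu_sva_get_pasid(handle). */
    handle = iommu_sva_bind_device(dev, current->mm, NULL);
    if (IS_ERR(handle)) {
            iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);
            return PTR_ERR(handle);
    }

    /* ... issue DMA tagged with the PASID ... */

    iommu_sva_unbind_device(handle);
    iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);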

[1] https://lore.kernel.org/linux-iommu/20210411114659.15051-1-eric.auger@redhat.com/
    https://lore.kernel.org/linux-iommu/20210411111228.14386-1-eric.auger@redhat.com/
[2] https://lore.kernel.org/linux-iommu/20210115121342.15093-1-vivek.gautam@arm.com/
[3] https://www.spinics.net/lists/arm-kernel/msg886518.html

Vivek Gautam (11):
  uapi/virtio-iommu: Add page request grp-id and flags information
  iommu/virtio: Maintain a list of endpoints served by viommu_dev
  iommu/virtio: Handle incoming page faults
  iommu/virtio: Add a io page fault queue
  iommu/virtio: Add SVA feature and related enable/disable callbacks
  iommu/pasid-table: Add pasid table ops for shared context management
  iommu/arm-smmu-v3: Move shared context descriptor code to cd-lib
  iommu/arm-smmu-v3: Implement shared context alloc and free ops
  iommu/virtio: Implement sva bind/unbind calls
  uapi/virtio-iommu: Add a new request type to send page response
  iommu/virtio: Add support to send page response

 .../arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c      | 165 ++++-
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 230 +------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   |   1 -
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |   2 -
 drivers/iommu/iommu-pasid-table.h             |  43 ++
 drivers/iommu/virtio-iommu.c                  | 634 +++++++++++++++++-
 include/uapi/linux/virtio_iommu.h             |  35 +
 7 files changed, 904 insertions(+), 206 deletions(-)

-- 
2.17.1


* [PATCH RFC v1 01/11] uapi/virtio-iommu: Add page request grp-id and flags information
From: Vivek Gautam @ 2021-04-23  9:51 UTC
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam, Alex Williamson,
	Lorenzo Pieralisi

Add fault information for the group-id and the necessary flags for
page request faults, so that they can be handled by the page fault
handler in the virtio-iommu driver.
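
For illustration, a device (e.g. a VMM back-end) reporting a PRI page
request would fill the new fields roughly as follows; a hedged sketch,
not part of this patch, with pasid/grpid/addr as placeholders:

    fault->flt_type = cpu_to_le16(VIRTIO_IOMMU_FAULT_F_PAGE_REQ);
    fault->pr_evt_flags = cpu_to_le32(VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID |
                                      VIRTIO_IOMMU_FAULT_PRQ_F_LAST_PAGE);
    fault->pasid   = cpu_to_le32(pasid);
    fault->grpid   = cpu_to_le32(grpid);
    fault->address = cpu_to_le64(addr);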

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: Eric Auger <eric.auger@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Liu Yi L <yi.l.liu@intel.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
---
 include/uapi/linux/virtio_iommu.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index f8bf927a0689..accc3318ce46 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -307,14 +307,27 @@ struct virtio_iommu_req_invalidate {
 #define VIRTIO_IOMMU_FAULT_F_DMA_UNRECOV	1
 #define VIRTIO_IOMMU_FAULT_F_PAGE_REQ		2
 
+#define VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_PRQ_F_LAST_PAGE		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_PRQ_F_PRIV_DATA		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_PRQ_F_NEEDS_PASID		(1 << 3)
+
+#define VIRTIO_IOMMU_FAULT_UNREC_F_PASID_VALID		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_UNREC_F_ADDR_VALID		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_UNREC_F_FETCH_ADDR_VALID	(1 << 2)
+
 struct virtio_iommu_fault {
 	__u8					reason;
 	__u8					reserved[3];
 	__le16					flt_type;
 	__u8					reserved2[2];
+	/* flags is actually permission flags */
 	__le32					flags;
+	/* flags for PASID and Page request handling info */
+	__le32					pr_evt_flags;
 	__le32					endpoint;
 	__le32					pasid;
+	__le32					grpid;
 	__u8					reserved3[4];
 	__le64					address;
 	__u8					reserved4[8];
-- 
2.17.1


* [PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served by viommu_dev
From: Vivek Gautam @ 2021-04-23  9:51 UTC
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

Keep a record of the endpoints served by the virtio-iommu device, so
that page fault requests can be redirected to the correct endpoint
device for handling.
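
A later patch in this series uses this list to resolve the endpoint ID
carried by a fault to its device, roughly:

    list_for_each_entry(ep, &viommu->endpoints, list) {
            if (ep->eid == endpoint) {
                    vdev = ep->vdev;
                    break;
            }
    }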

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
 drivers/iommu/virtio-iommu.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 50039070e2aa..c970f386f031 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -48,6 +48,7 @@ struct viommu_dev {
 	spinlock_t			request_lock;
 	struct list_head		requests;
 	void				*evts;
+	struct list_head		endpoints;
 
 	/* Device configuration */
 	struct iommu_domain_geometry	geometry;
@@ -115,6 +116,12 @@ struct viommu_endpoint {
 	void				*pgtf;
 };
 
+struct viommu_ep_entry {
+	u32				eid;
+	struct viommu_endpoint		*vdev;
+	struct list_head		list;
+};
+
 struct viommu_request {
 	struct list_head		list;
 	void				*writeback;
@@ -573,6 +580,7 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	size_t probe_len;
 	struct virtio_iommu_req_probe *probe;
 	struct virtio_iommu_probe_property *prop;
+	struct viommu_ep_entry *ep;
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
 	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
 
@@ -640,6 +648,18 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 		prop = (void *)probe->properties + cur;
 		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
 	}
+	if (ret)
+		goto out_free;
+
+	ep = kzalloc(sizeof(*ep), GFP_KERNEL);
+	if (!ep) {
+		ret = -ENOMEM;
+		goto out_free;
+	}
+	ep->eid = probe->endpoint;
+	ep->vdev = vdev;
+
+	list_add(&ep->list, &viommu->endpoints);
 
 out_free:
 	kfree(probe);
@@ -1649,6 +1669,7 @@ static int viommu_probe(struct virtio_device *vdev)
 	viommu->dev = dev;
 	viommu->vdev = vdev;
 	INIT_LIST_HEAD(&viommu->requests);
+	INIT_LIST_HEAD(&viommu->endpoints);
 
 	ret = viommu_init_vqs(viommu);
 	if (ret)
-- 
2.17.1


* [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults
From: Vivek Gautam @ 2021-04-23  9:51 UTC
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

Redirect incoming page faults to the registered fault handler, passing
along fault information such as the pasid, the page request group-id,
the address, and the pasid flags.
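
Note that iommu_report_device_fault() only delivers the fault if a
handler has been registered for the endpoint. A sketch of the assumed
registration through the IOPF framework (the actual calls are wired up
by later patches in this series):

    /* Attach the endpoint to the PRI queue and let the generic
     * iommu_queue_iopf() handler consume reported faults. */
    iopf_queue_add_device(viommu->iopf_pri, dev);
    iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);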

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 80 ++++++++++++++++++++++++++++++-
 include/uapi/linux/virtio_iommu.h |  1 +
 2 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index c970f386f031..fd237cad1ce5 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -37,6 +37,13 @@
 /* Some architectures need an Address Space ID for each page table */
 DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
 
+struct viommu_dev_pri_work {
+	struct work_struct		work;
+	struct viommu_dev		*dev;
+	struct virtio_iommu_fault	*vfault;
+	u32				endpoint;
+};
+
 struct viommu_dev {
 	struct iommu_device		iommu;
 	struct device			*dev;
@@ -49,6 +56,8 @@ struct viommu_dev {
 	struct list_head		requests;
 	void				*evts;
 	struct list_head		endpoints;
+	struct workqueue_struct		*pri_wq;
+	struct viommu_dev_pri_work	*pri_work;
 
 	/* Device configuration */
 	struct iommu_domain_geometry	geometry;
@@ -666,6 +675,58 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	return ret;
 }
 
+static void viommu_handle_ppr(struct work_struct *work)
+{
+	struct viommu_dev_pri_work *pwork =
+				container_of(work, struct viommu_dev_pri_work, work);
+	struct viommu_dev *viommu = pwork->dev;
+	struct virtio_iommu_fault *vfault = pwork->vfault;
+	struct viommu_endpoint *vdev;
+	struct viommu_ep_entry *ep;
+	struct iommu_fault_event fault_evt = {
+		.fault.type = IOMMU_FAULT_PAGE_REQ,
+	};
+	struct iommu_fault_page_request *prq = &fault_evt.fault.prm;
+
+	u32 flags	= le32_to_cpu(vfault->flags);
+	u32 prq_flags	= le32_to_cpu(vfault->pr_evt_flags);
+	u32 endpoint	= pwork->endpoint;
+
+	memset(prq, 0, sizeof(struct iommu_fault_page_request));
+	prq->addr = le64_to_cpu(vfault->address);
+
+	if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_LAST_PAGE)
+		prq->flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
+	if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) {
+		prq->flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
+		prq->pasid = le32_to_cpu(vfault->pasid);
+		prq->grpid = le32_to_cpu(vfault->grpid);
+	}
+
+	if (flags & VIRTIO_IOMMU_FAULT_F_READ)
+		prq->perm |= IOMMU_FAULT_PERM_READ;
+	if (flags & VIRTIO_IOMMU_FAULT_F_WRITE)
+		prq->perm |= IOMMU_FAULT_PERM_WRITE;
+	if (flags & VIRTIO_IOMMU_FAULT_F_EXEC)
+		prq->perm |= IOMMU_FAULT_PERM_EXEC;
+	if (flags & VIRTIO_IOMMU_FAULT_F_PRIV)
+		prq->perm |= IOMMU_FAULT_PERM_PRIV;
+
+	list_for_each_entry(ep, &viommu->endpoints, list) {
+		if (ep->eid == endpoint) {
+			vdev = ep->vdev;
+			break;
+		}
+	}
+
+	if ((prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) &&
+	    (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_NEEDS_PASID))
+		prq->flags |= IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
+
+	if (iommu_report_device_fault(vdev->dev, &fault_evt))
+		dev_err(vdev->dev, "Couldn't handle page request\n");
+}
+
 static int viommu_fault_handler(struct viommu_dev *viommu,
 				struct virtio_iommu_fault *fault)
 {
@@ -679,7 +740,13 @@ static int viommu_fault_handler(struct viommu_dev *viommu,
 	u32 pasid	= le32_to_cpu(fault->pasid);
 
 	if (type == VIRTIO_IOMMU_FAULT_F_PAGE_REQ) {
-		dev_info(viommu->dev, "Page request fault - unhandled\n");
+		dev_info_ratelimited(viommu->dev,
+				     "Page request fault from EP %u\n",
+				     endpoint);
+
+		viommu->pri_work->vfault = fault;
+		viommu->pri_work->endpoint = endpoint;
+		queue_work(viommu->pri_wq, &viommu->pri_work->work);
 		return 0;
 	}
 
@@ -1683,6 +1750,17 @@ static int viommu_probe(struct virtio_device *vdev)
 		goto err_free_vqs;
 	}
 
+	viommu->pri_work = kzalloc(sizeof(*viommu->pri_work), GFP_KERNEL);
+	if (!viommu->pri_work)
+		return -ENOMEM;
+
+	viommu->pri_work->dev = viommu;
+
+	INIT_WORK(&viommu->pri_work->work, viommu_handle_ppr);
+	viommu->pri_wq = create_singlethread_workqueue("viommu-pri-wq");
+	if (!viommu->pri_wq)
+		return -ENOMEM;
+
 	viommu->map_flags = VIRTIO_IOMMU_MAP_F_READ | VIRTIO_IOMMU_MAP_F_WRITE;
 	viommu->last_domain = ~0U;
 
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index accc3318ce46..53aa88e6b077 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -302,6 +302,7 @@ struct virtio_iommu_req_invalidate {
 #define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
 #define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
 #define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_F_PRIV		(1 << 3)
 #define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
 
 #define VIRTIO_IOMMU_FAULT_F_DMA_UNRECOV	1
-- 
2.17.1


* [PATCH RFC v1 04/11] iommu/virtio: Add a io page fault queue
From: Vivek Gautam @ 2021-04-23  9:51 UTC
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

Add an I/O page fault queue to the driver so that it can be used to
redirect faults to the handler.
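
A sketch of the assumed per-endpoint usage of this queue (the real
calls are added when the SVA/IOPF features are enabled for an
endpoint):

    /* On enable: route the endpoint's reported faults to the queue */
    ret = iopf_queue_add_device(viommu->iopf_pri, vdev->dev);

    /* On disable: drain pending faults, then detach the endpoint */
    iopf_queue_flush_dev(vdev->dev);
    iopf_queue_remove_device(viommu->iopf_pri, vdev->dev);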

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
 drivers/iommu/virtio-iommu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index fd237cad1ce5..3da5f0807711 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -58,6 +58,7 @@ struct viommu_dev {
 	struct list_head		endpoints;
 	struct workqueue_struct		*pri_wq;
 	struct viommu_dev_pri_work	*pri_work;
+	struct iopf_queue		*iopf_pri;
 
 	/* Device configuration */
 	struct iommu_domain_geometry	geometry;
@@ -581,6 +582,15 @@ static int viommu_add_pstf(struct viommu_endpoint *vdev, void *pstf, size_t len)
 	return 0;
 }
 
+static int viommu_init_queues(struct viommu_dev *viommu)
+{
+	viommu->iopf_pri = iopf_queue_alloc(dev_name(viommu->dev));
+	if (!viommu->iopf_pri)
+		return -ENOMEM;
+
+	return 0;
+}
+
 static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 {
 	int ret;
@@ -670,6 +680,15 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 
 	list_add(&ep->list, &viommu->endpoints);
 
+	ret = viommu_init_queues(viommu);
+	if (ret)
+		goto out_free_eps;
+
+	kfree(probe);
+	return 0;
+
+out_free_eps:
+	kfree(ep);
 out_free:
 	kfree(probe);
 	return ret;
-- 
2.17.1


* [PATCH RFC v1 05/11] iommu/virtio: Add SVA feature and related enable/disable callbacks
From: Vivek Gautam @ 2021-04-23  9:51 UTC
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

Add a feature flag to the virtio-iommu for Shared Virtual Addressing
(SVA). This feature indicates the availability of a path for handling
device page faults and the provision for sending page responses.
Also add the necessary methods to enable and disable SVA so that
masters can enable the SVA path. This also requires enabling the PRI
capability on the device.

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
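
For reviewers, a rough sketch of how a master driver would consume
these callbacks through the generic IOMMU API (hypothetical consumer;
the bind call assumes the SVA bind path wired up elsewhere in this
series):

	static int my_driver_enable_sva(struct device *dev,
					struct mm_struct *mm)
	{
		struct iommu_sva *handle;
		int ret;

		/* Ends up in viommu_endpoint_enable_iopf(): enables PRI */
		ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
		if (ret)
			return ret;

		/* Registers the fault handler and joins the IOPF queue */
		ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
		if (ret)
			goto err_disable_iopf;

		handle = iommu_sva_bind_device(dev, mm, NULL);
		if (IS_ERR(handle)) {
			ret = PTR_ERR(handle);
			goto err_disable_sva;
		}
		return 0;

	err_disable_sva:
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);
	err_disable_iopf:
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
		return ret;
	}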
 drivers/iommu/virtio-iommu.c      | 270 ++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_iommu.h |   1 +
 2 files changed, 271 insertions(+)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 3da5f0807711..250c137a211b 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -18,6 +18,7 @@
 #include <linux/of_iommu.h>
 #include <linux/of_platform.h>
 #include <linux/pci.h>
+#include <linux/pci-ats.h>
 #include <linux/platform_device.h>
 #include <linux/virtio.h>
 #include <linux/virtio_config.h>
@@ -26,6 +27,7 @@
 
 #include <uapi/linux/virtio_iommu.h>
 #include "iommu-pasid-table.h"
+#include "iommu-sva-lib.h"
 
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
@@ -37,6 +39,9 @@
 /* Some architectures need an Address Space ID for each page table */
 DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
 
+static DEFINE_MUTEX(sva_lock);
+static DEFINE_MUTEX(iopf_lock);
+
 struct viommu_dev_pri_work {
 	struct work_struct		work;
 	struct viommu_dev		*dev;
@@ -71,6 +76,7 @@ struct viommu_dev {
 
 	bool				has_map:1;
 	bool				has_table:1;
+	bool				has_sva:1;
 };
 
 struct viommu_mapping {
@@ -124,6 +130,12 @@ struct viommu_endpoint {
 	void				*pstf;
 	/* Preferred page table format */
 	void				*pgtf;
+
+	/* sva */
+	bool				ats_supported;
+	bool				pri_supported;
+	bool				sva_enabled;
+	bool				iopf_enabled;
 };
 
 struct viommu_ep_entry {
@@ -582,6 +594,64 @@ static int viommu_add_pstf(struct viommu_endpoint *vdev, void *pstf, size_t len)
 	return 0;
 }
 
+static int viommu_init_ats_pri(struct viommu_endpoint *vdev)
+{
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(vdev->dev))
+		return -EINVAL;
+
+	pdev = to_pci_dev(vdev->dev);
+	if (pci_ats_supported(pdev))
+		vdev->ats_supported = true;
+
+	if (pci_pri_supported(pdev))
+		vdev->pri_supported = true;
+
+	return 0;
+}
+
+static int viommu_enable_pri(struct viommu_endpoint *vdev)
+{
+	int ret;
+	struct pci_dev *pdev;
+
+	/* Let's allow only 4 requests for PRI right now */
+	size_t max_inflight_pprs = 4;
+
+	if (!vdev->pri_supported || !vdev->ats_supported)
+		return -ENODEV;
+
+	pdev = to_pci_dev(vdev->dev);
+
+	ret = pci_reset_pri(pdev);
+	if (ret)
+		return ret;
+
+	ret = pci_enable_pri(pdev, max_inflight_pprs);
+	if (ret) {
+		dev_err(vdev->dev, "cannot enable PRI: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void viommu_disable_pri(struct viommu_endpoint *vdev)
+{
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(vdev->dev))
+		return;
+
+	pdev = to_pci_dev(vdev->dev);
+
+	if (!pdev->pri_enabled)
+		return;
+
+	pci_disable_pri(pdev);
+}
+
 static int viommu_init_queues(struct viommu_dev *viommu)
 {
 	viommu->iopf_pri = iopf_queue_alloc(dev_name(viommu->dev));
@@ -684,6 +754,10 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	if (ret)
 		goto out_free_eps;
 
+	ret = viommu_init_ats_pri(vdev);
+	if (ret)
+		goto out_free_eps;
+
 	kfree(probe);
 	return 0;
 
@@ -1681,6 +1755,196 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
 	return iommu_fwspec_add_ids(dev, args->args, 1);
 }
 
+static bool viommu_endpoint_iopf_supported(struct viommu_endpoint *vdev)
+{
+	/* TODO: support Stall model later */
+	return vdev->pri_supported;
+}
+
+bool viommu_endpoint_sva_supported(struct viommu_endpoint *vdev)
+{
+	struct viommu_dev *viommu = vdev->viommu;
+
+	if (!viommu->has_sva)
+		return false;
+
+	return vdev->pasid_bits;
+}
+
+bool viommu_endpoint_sva_enabled(struct viommu_endpoint *vdev)
+{
+	bool enabled;
+
+	mutex_lock(&sva_lock);
+	enabled = vdev->sva_enabled;
+	mutex_unlock(&sva_lock);
+	return enabled;
+}
+
+static int viommu_endpoint_sva_enable_iopf(struct viommu_endpoint *vdev)
+{
+	int ret;
+	struct device *dev = vdev->dev;
+
+	if (!viommu_endpoint_iopf_supported(vdev))
+		return 0;
+
+	if (!vdev->iopf_enabled)
+		return -EINVAL;
+
+	if (vdev->pri_supported) {
+		ret = iopf_queue_add_device(vdev->viommu->iopf_pri, dev);
+		if (ret)
+			return ret;
+	} else {
+		/* No other way to handle IO page faults for now */
+		return -EINVAL;
+	}
+
+	ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
+	if (ret)
+		iopf_queue_remove_device(vdev->viommu->iopf_pri, dev);
+
+	return ret;
+}
+
+static void viommu_endpoint_sva_disable_iopf(struct viommu_endpoint *vdev)
+{
+	struct device *dev = vdev->dev;
+
+	if (!vdev->iopf_enabled)
+		return;
+
+	iommu_unregister_device_fault_handler(dev);
+	iopf_queue_remove_device(vdev->viommu->iopf_pri, dev);
+}
+
+static int viommu_endpoint_enable_sva(struct viommu_endpoint *vdev)
+{
+	int ret;
+
+	mutex_lock(&sva_lock);
+	ret = viommu_endpoint_sva_enable_iopf(vdev);
+	if (!ret)
+		vdev->sva_enabled = true;
+	mutex_unlock(&sva_lock);
+
+	return ret;
+}
+
+static int viommu_endpoint_disable_sva(struct viommu_endpoint *vdev)
+{
+	mutex_lock(&sva_lock);
+	viommu_endpoint_sva_disable_iopf(vdev);
+	vdev->sva_enabled = false;
+	mutex_unlock(&sva_lock);
+
+	return 0;
+}
+
+int viommu_endpoint_enable_iopf(struct viommu_endpoint *vdev)
+{
+	int ret = 0;
+
+	mutex_lock(&iopf_lock);
+	if (vdev->pri_supported) {
+		ret = viommu_enable_pri(vdev);
+		if (ret)
+			goto out_unlock;
+	}
+	vdev->iopf_enabled = true;
+out_unlock:
+	mutex_unlock(&iopf_lock);
+	return ret;
+}
+
+int viommu_endpoint_disable_iopf(struct viommu_endpoint *vdev)
+{
+	if (vdev->sva_enabled)
+		return -EBUSY;
+
+	mutex_lock(&iopf_lock);
+	viommu_disable_pri(vdev);
+	vdev->iopf_enabled = false;
+	mutex_unlock(&iopf_lock);
+	return 0;
+}
+
+static bool viommu_dev_has_feature(struct device *dev,
+				   enum iommu_dev_features feat)
+{
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+
+	if (!vdev)
+		return false;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_IOPF:
+		return viommu_endpoint_iopf_supported(vdev);
+	case IOMMU_DEV_FEAT_SVA:
+		return viommu_endpoint_sva_supported(vdev);
+	default:
+		return false;
+	}
+}
+
+static bool viommu_dev_feature_enabled(struct device *dev,
+				       enum iommu_dev_features feat)
+{
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+
+	if (!vdev)
+		return false;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_IOPF:
+		return vdev->iopf_enabled;
+	case IOMMU_DEV_FEAT_SVA:
+		return viommu_endpoint_sva_enabled(vdev);
+	default:
+		return false;
+	}
+}
+
+static int viommu_dev_enable_feature(struct device *dev,
+				     enum iommu_dev_features feat)
+{
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+
+	if (!viommu_dev_has_feature(dev, feat))
+		return -ENODEV;
+
+	if (viommu_dev_feature_enabled(dev, feat))
+		return -EBUSY;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_IOPF:
+		return viommu_endpoint_enable_iopf(vdev);
+	case IOMMU_DEV_FEAT_SVA:
+		return viommu_endpoint_enable_sva(vdev);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int viommu_dev_disable_feature(struct device *dev,
+				      enum iommu_dev_features feat)
+{
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+
+	if (!viommu_dev_feature_enabled(dev, feat))
+		return -EINVAL;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_IOPF:
+		return viommu_endpoint_disable_iopf(vdev);
+	case IOMMU_DEV_FEAT_SVA:
+		return viommu_endpoint_disable_sva(vdev);
+	default:
+		return -EINVAL;
+	}
+}
+
 static struct iommu_ops viommu_ops = {
 	.domain_alloc		= viommu_domain_alloc,
 	.domain_free		= viommu_domain_free,
@@ -1695,6 +1957,9 @@ static struct iommu_ops viommu_ops = {
 	.get_resv_regions	= viommu_get_resv_regions,
 	.put_resv_regions	= generic_iommu_put_resv_regions,
 	.of_xlate		= viommu_of_xlate,
+	.dev_feat_enabled	= viommu_dev_feature_enabled,
+	.dev_enable_feat	= viommu_dev_enable_feature,
+	.dev_disable_feat	= viommu_dev_disable_feature,
 };
 
 static int viommu_init_vqs(struct viommu_dev *viommu)
@@ -1811,6 +2076,8 @@ static int viommu_probe(struct virtio_device *vdev)
 		goto err_free_vqs;
 	}
 
+	viommu->has_sva = virtio_has_feature(vdev, VIRTIO_IOMMU_F_SVA);
+
 	viommu->geometry = (struct iommu_domain_geometry) {
 		.aperture_start	= input_start,
 		.aperture_end	= input_end,
@@ -1902,6 +2169,7 @@ static unsigned int features[] = {
 	VIRTIO_IOMMU_F_PROBE,
 	VIRTIO_IOMMU_F_MMIO,
 	VIRTIO_IOMMU_F_ATTACH_TABLE,
+	VIRTIO_IOMMU_F_SVA,
 };
 
 static struct virtio_device_id id_table[] = {
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 53aa88e6b077..88a3db493108 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -17,6 +17,7 @@
 #define VIRTIO_IOMMU_F_PROBE			4
 #define VIRTIO_IOMMU_F_MMIO			5
 #define VIRTIO_IOMMU_F_ATTACH_TABLE		6
+#define VIRTIO_IOMMU_F_SVA			7
 
 struct virtio_iommu_range_64 {
 	__le64					start;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 97+ messages in thread

* [PATCH RFC v1 06/11] iommu/pasid-table: Add pasid table ops for shared context management
  2021-04-23  9:51 ` Vivek Gautam
  (?)
@ 2021-04-23  9:51   ` Vivek Gautam
  -1 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-04-23  9:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

Add pasid table ops to allocate and free shared contexts. These
helpers interact with the mm through mmu notifiers, so add an mmu
notifier implementation structure to the pasid tables as well. This
change helps pull the shared pasid (context-descriptor) code out of
arm-smmu-v3.

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
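
For reviewers, a short sketch of an intended caller of the new ops
(hypothetical bind path; 'viommu_asid_xa' and 'asid_bits' stand in
for whatever state the caller owns):

	static struct iommu_psdtable_mmu_notifier *
	bind_shared_context(struct iommu_pasid_table *tbl,
			    struct mm_struct *mm, u32 asid_bits)
	{
		struct iommu_psdtable_mmu_notifier *pst_mn;

		pst_mn = iommu_psdtable_alloc_shared(tbl, mm,
						     &viommu_asid_xa,
						     asid_bits);
		if (IS_ERR_OR_NULL(pst_mn))
			return pst_mn;

		/* Track the notifier for lookup/teardown at unbind time */
		list_add(&pst_mn->list, &tbl->mmu_notifiers);
		return pst_mn;
	}

The vendor implementation (arm-smmu-v3's context-descriptor code in
the following patches) is expected to keep its state behind the
'vendor' union and the embedded mmu_notifier, retrieved again with
mn_to_pstiommu() from the notifier callbacks.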
 .../arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c      |  1 +
 drivers/iommu/iommu-pasid-table.h             | 42 +++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
index fa1a6a632559..ea94f57ad261 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
@@ -299,6 +299,7 @@ arm_smmu_register_cd_table(struct device *dev, void *cookie,
 	tbl->cookie = cookie;
 	tbl->ops = &arm_cd_table_ops;
 	tbl->cfg.sync = sync_ops;
+	INIT_LIST_HEAD(&tbl->mmu_notifiers);
 
 	return tbl;
 }
diff --git a/drivers/iommu/iommu-pasid-table.h b/drivers/iommu/iommu-pasid-table.h
index 00b1c66e6a9e..4d6590e60f9b 100644
--- a/drivers/iommu/iommu-pasid-table.h
+++ b/drivers/iommu/iommu-pasid-table.h
@@ -8,6 +8,7 @@
 #define __IOMMU_PASID_TABLE_H
 
 #include <linux/io-pgtable.h>
+#include <linux/mmu_notifier.h>
 
 #include "arm/arm-smmu-v3/arm-smmu-v3.h"
 
@@ -52,6 +53,21 @@ struct iommu_vendor_psdtable_cfg {
 
 struct iommu_vendor_psdtable_ops;
 
+/* Kept in line with 'struct arm_smmu_mmu_notifier' */
+struct iommu_psdtable_mmu_notifier {
+	struct mmu_notifier		mn;
+	bool				cleared;
+	refcount_t			refs;
+	struct list_head		list;
+	/* cookie captures the domain implementation */
+	void				*cookie;
+	union {
+		struct arm_smmu_ctx_desc	*cd;
+	} vendor;
+};
+#define mn_to_pstiommu(mn)	\
+	container_of(mn, struct iommu_psdtable_mmu_notifier, mn)
+
 /**
  * struct iommu_pasid_table - describes a set of PASID tables
  *
@@ -64,6 +80,7 @@ struct iommu_pasid_table {
 	void					*cookie;
 	struct iommu_vendor_psdtable_cfg	cfg;
 	const struct iommu_vendor_psdtable_ops	*ops;
+	struct list_head			mmu_notifiers;
 };
 
 #define pasid_table_cfg_to_table(pst_cfg) \
@@ -77,6 +94,11 @@ struct iommu_vendor_psdtable_ops {
 	int (*write)(struct iommu_vendor_psdtable_cfg *cfg, int ssid,
 		     void *cookie);
 	bool (*free_asid)(struct xarray *xa, void *cookie_cd);
+	struct iommu_psdtable_mmu_notifier *
+	(*alloc_shared)(struct iommu_pasid_table *tbl, struct mm_struct *mm,
+			struct xarray *xa, u32 asid_bits);
+	void (*free_shared)(struct iommu_pasid_table *tbl, struct xarray *xa,
+			    void *cookie_cd);
 };
 
 /* CD manipulation ops */
@@ -129,6 +151,26 @@ static inline bool iommu_psdtable_free_asid(struct iommu_pasid_table *tbl,
 	return tbl->ops->free_asid(xa, cookie_cd);
 }
 
+static inline struct iommu_psdtable_mmu_notifier *
+iommu_psdtable_alloc_shared(struct iommu_pasid_table *tbl,
+			    struct mm_struct *mm, struct xarray *xa,
+			    u32 asid_bits)
+{
+	if (!tbl->ops->alloc_shared)
+		return NULL;
+
+	return tbl->ops->alloc_shared(tbl, mm, xa, asid_bits);
+}
+
+static inline void iommu_psdtable_free_shared(struct iommu_pasid_table *tbl,
+					      struct xarray *xa, void *cookie)
+{
+	if (!tbl->ops->free_shared)
+		return;
+
+	tbl->ops->free_shared(tbl, xa, cookie);
+}
+
 /* Sync ops for CD cfg or tlb */
 static inline void iommu_psdtable_flush(struct iommu_pasid_table *tbl,
 					void *cookie, int ssid, bool leaf)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 97+ messages in thread

* [PATCH RFC v1 07/11] iommu/arm-smmu-v3: Move shared context descriptor code to cd-lib
  2021-04-23  9:51 ` Vivek Gautam
  (?)
@ 2021-04-23  9:51   ` Vivek Gautam
  -1 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-04-23  9:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

In preparation for adding the shared descriptor allocation and free
pasid table ops, pull the alloc_shared_cd and free_shared_cd helpers
out of arm-smmu-v3 and place them in the arm-smmu-v3-cd-lib library
driver.

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
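
With the helpers exported from cd-lib, a bind path can pair them
roughly as follows (sketch only; the function is hypothetical, and
mmu notifier registration is elided):

	static int example_bind(struct iommu_pasid_table *tbl,
				struct mm_struct *mm, ioasid_t pasid)
	{
		int ret;
		struct arm_smmu_ctx_desc *cd;

		/* Pins the mm's ASID and builds a shared CD */
		cd = arm_smmu_alloc_shared_cd(tbl, mm);
		if (IS_ERR(cd))
			return PTR_ERR(cd);

		/* Install the CD at the given PASID index */
		ret = iommu_psdtable_write(tbl, &tbl->cfg, pasid, cd);
		if (ret)
			arm_smmu_free_shared_cd(tbl, cd);
		return ret;
	}

The unbind path undoes this with arm_smmu_free_shared_cd(), which
drops the CD refcount and unpins the ASID on the last put.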
 .../arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c      | 149 ++++++++++++++++++
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 147 -----------------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  12 ++
 drivers/iommu/iommu-pasid-table.h             |   1 +
 4 files changed, 162 insertions(+), 147 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
index ea94f57ad261..537b7c784d40 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
@@ -6,9 +6,11 @@
  */
 
 #include <linux/dma-iommu.h>
+#include <linux/slab.h>
 
 #include "arm-smmu-v3.h"
 #include "../../iommu-pasid-table.h"
+#include "../../io-pgtable-arm.h"
 
 static int arm_smmu_alloc_cd_leaf_table(struct device *dev,
 					struct arm_smmu_l1_ctx_desc *l1_desc)
@@ -278,6 +280,153 @@ static bool arm_smmu_free_asid(struct xarray *xa, void *cookie_cd)
 	return free;
 }
 
+/*
+ * Check if the CPU ASID is available on the SMMU side. If a private context
+ * descriptor is using it, try to replace it.
+ */
+static struct arm_smmu_ctx_desc *
+arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
+{
+	int ret;
+	u32 new_asid;
+	struct arm_smmu_ctx_desc *cd;
+	struct arm_smmu_device *smmu;
+	struct arm_smmu_domain *smmu_domain;
+	struct iommu_pasid_table *tbl;
+
+	cd = xa_load(&arm_smmu_asid_xa, asid);
+	if (!cd)
+		return NULL;
+
+	if (cd->mm) {
+		if (WARN_ON(cd->mm != mm))
+			return ERR_PTR(-EINVAL);
+		/* All devices bound to this mm use the same cd struct. */
+		refcount_inc(&cd->refs);
+		return cd;
+	}
+
+	smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
+	smmu = smmu_domain->smmu;
+	tbl = smmu_domain->tbl;
+
+	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
+		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+	if (ret)
+		return ERR_PTR(-ENOSPC);
+	/*
+	 * Race with unmap: TLB invalidations will start targeting the new ASID,
+	 * which isn't assigned yet. We'll do an invalidate-all on the old ASID
+	 * later, so it doesn't matter.
+	 */
+	cd->asid = new_asid;
+	/*
+	 * Update ASID and invalidate CD in all associated masters. There will
+	 * be some overlap between use of both ASIDs, until we invalidate the
+	 * TLB.
+	 */
+	ret = iommu_psdtable_write(tbl, &tbl->cfg, 0, cd);
+	if (ret)
+		return ERR_PTR(-ENOSYS);
+
+	/* Invalidate TLB entries previously associated with that context */
+	iommu_psdtable_flush_tlb(tbl, smmu_domain, asid);
+
+	xa_erase(&arm_smmu_asid_xa, asid);
+	return NULL;
+}
+
+struct arm_smmu_ctx_desc *
+arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
+{
+	u16 asid;
+	int err = 0;
+	u64 tcr, par, reg;
+	struct arm_smmu_ctx_desc *cd;
+	struct arm_smmu_ctx_desc *ret = NULL;
+
+	asid = arm64_mm_context_get(mm);
+	if (!asid)
+		return ERR_PTR(-ESRCH);
+
+	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
+	if (!cd) {
+		err = -ENOMEM;
+		goto out_put_context;
+	}
+
+	refcount_set(&cd->refs, 1);
+
+	mutex_lock(&arm_smmu_asid_lock);
+	ret = arm_smmu_share_asid(mm, asid);
+	if (ret) {
+		mutex_unlock(&arm_smmu_asid_lock);
+		goto out_free_cd;
+	}
+
+	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
+	mutex_unlock(&arm_smmu_asid_lock);
+
+	if (err)
+		goto out_free_asid;
+
+	tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - vabits_actual) |
+	      FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) |
+	      FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) |
+	      FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) |
+	      CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
+
+	switch (PAGE_SIZE) {
+	case SZ_4K:
+		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_4K);
+		break;
+	case SZ_16K:
+		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_16K);
+		break;
+	case SZ_64K:
+		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_64K);
+		break;
+	default:
+		WARN_ON(1);
+		err = -EINVAL;
+		goto out_free_asid;
+	}
+
+	reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	par = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT);
+	tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_IPS, par);
+
+	cd->ttbr = virt_to_phys(mm->pgd);
+	cd->tcr = tcr;
+	/*
+	 * MAIR value is pretty much constant and global, so we can just get it
+	 * from the current CPU register
+	 */
+	cd->mair = read_sysreg(mair_el1);
+	cd->asid = asid;
+	cd->mm = mm;
+
+	return cd;
+
+out_free_asid:
+	iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd);
+out_free_cd:
+	kfree(cd);
+out_put_context:
+	arm64_mm_context_put(mm);
+	return err < 0 ? ERR_PTR(err) : ret;
+}
+
+void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
+			     struct arm_smmu_ctx_desc *cd)
+{
+	if (iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd)) {
+		/* Unpin ASID */
+		arm64_mm_context_put(cd->mm);
+		kfree(cd);
+	}
+}
+
 struct iommu_vendor_psdtable_ops arm_cd_table_ops = {
 	.alloc	 = arm_smmu_alloc_cd_tables,
 	.free	 = arm_smmu_free_cd_tables,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 775c34899935..da35d4cc0c1e 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -37,153 +37,6 @@ struct arm_smmu_bond {
 
 static DEFINE_MUTEX(sva_lock);
 
-/*
- * Check if the CPU ASID is available on the SMMU side. If a private context
- * descriptor is using it, try to replace it.
- */
-static struct arm_smmu_ctx_desc *
-arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
-{
-	int ret;
-	u32 new_asid;
-	struct arm_smmu_ctx_desc *cd;
-	struct arm_smmu_device *smmu;
-	struct arm_smmu_domain *smmu_domain;
-	struct iommu_pasid_table *tbl;
-
-	cd = xa_load(&arm_smmu_asid_xa, asid);
-	if (!cd)
-		return NULL;
-
-	if (cd->mm) {
-		if (WARN_ON(cd->mm != mm))
-			return ERR_PTR(-EINVAL);
-		/* All devices bound to this mm use the same cd struct. */
-		refcount_inc(&cd->refs);
-		return cd;
-	}
-
-	smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
-	smmu = smmu_domain->smmu;
-	tbl = smmu_domain->tbl;
-
-	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
-		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
-	if (ret)
-		return ERR_PTR(-ENOSPC);
-	/*
-	 * Race with unmap: TLB invalidations will start targeting the new ASID,
-	 * which isn't assigned yet. We'll do an invalidate-all on the old ASID
-	 * later, so it doesn't matter.
-	 */
-	cd->asid = new_asid;
-	/*
-	 * Update ASID and invalidate CD in all associated masters. There will
-	 * be some overlap between use of both ASIDs, until we invalidate the
-	 * TLB.
-	 */
-	ret = iommu_psdtable_write(tbl, &tbl->cfg, 0, cd);
-	if (ret)
-		return ERR_PTR(-ENOSYS);
-
-	/* Invalidate TLB entries previously associated with that context */
-	iommu_psdtable_flush_tlb(tbl, smmu_domain, asid);
-
-	xa_erase(&arm_smmu_asid_xa, asid);
-	return NULL;
-}
-
-static struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
-{
-	u16 asid;
-	int err = 0;
-	u64 tcr, par, reg;
-	struct arm_smmu_ctx_desc *cd;
-	struct arm_smmu_ctx_desc *ret = NULL;
-
-	asid = arm64_mm_context_get(mm);
-	if (!asid)
-		return ERR_PTR(-ESRCH);
-
-	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
-	if (!cd) {
-		err = -ENOMEM;
-		goto out_put_context;
-	}
-
-	refcount_set(&cd->refs, 1);
-
-	mutex_lock(&arm_smmu_asid_lock);
-	ret = arm_smmu_share_asid(mm, asid);
-	if (ret) {
-		mutex_unlock(&arm_smmu_asid_lock);
-		goto out_free_cd;
-	}
-
-	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
-	mutex_unlock(&arm_smmu_asid_lock);
-
-	if (err)
-		goto out_free_asid;
-
-	tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - vabits_actual) |
-	      FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) |
-	      FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) |
-	      FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) |
-	      CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
-
-	switch (PAGE_SIZE) {
-	case SZ_4K:
-		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_4K);
-		break;
-	case SZ_16K:
-		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_16K);
-		break;
-	case SZ_64K:
-		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_64K);
-		break;
-	default:
-		WARN_ON(1);
-		err = -EINVAL;
-		goto out_free_asid;
-	}
-
-	reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
-	par = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT);
-	tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_IPS, par);
-
-	cd->ttbr = virt_to_phys(mm->pgd);
-	cd->tcr = tcr;
-	/*
-	 * MAIR value is pretty much constant and global, so we can just get it
-	 * from the current CPU register
-	 */
-	cd->mair = read_sysreg(mair_el1);
-	cd->asid = asid;
-	cd->mm = mm;
-
-	return cd;
-
-out_free_asid:
-	iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd);
-out_free_cd:
-	kfree(cd);
-out_put_context:
-	arm64_mm_context_put(mm);
-	return err < 0 ? ERR_PTR(err) : ret;
-}
-
-static void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-				    struct arm_smmu_ctx_desc *cd)
-{
-	if (iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd)) {
-		/* Unpin ASID */
-		arm64_mm_context_put(cd->mm);
-		kfree(cd);
-	}
-}
-
 static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 					 struct mm_struct *mm,
 					 unsigned long start, unsigned long end)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index d5cf2d521c29..13ff024ab0d5 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -773,6 +773,10 @@ struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm,
 void arm_smmu_sva_unbind(struct iommu_sva *handle);
 u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle);
 void arm_smmu_sva_notifier_synchronize(void);
+struct arm_smmu_ctx_desc *
+arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm);
+void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
+			     struct arm_smmu_ctx_desc *cd);
 #else /* CONFIG_ARM_SMMU_V3_SVA */
 static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
 {
@@ -828,5 +832,13 @@ static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
 }
 
 static inline void arm_smmu_sva_notifier_synchronize(void) {}
+static inline struct arm_smmu_ctx_desc *
+arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
+{
+	return ERR_PTR(-EINVAL);
+}
+
+static inline void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
+					   struct arm_smmu_ctx_desc *cd) {}
 #endif /* CONFIG_ARM_SMMU_V3_SVA */
 #endif /* _ARM_SMMU_V3_H */
diff --git a/drivers/iommu/iommu-pasid-table.h b/drivers/iommu/iommu-pasid-table.h
index 4d6590e60f9b..0e774ae6682a 100644
--- a/drivers/iommu/iommu-pasid-table.h
+++ b/drivers/iommu/iommu-pasid-table.h
@@ -8,6 +8,7 @@
 #define __IOMMU_PASID_TABLE_H
 
 #include <linux/io-pgtable.h>
+#include <linux/mmu_context.h>
 #include <linux/mmu_notifier.h>
 
 #include "arm/arm-smmu-v3/arm-smmu-v3.h"
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 97+ messages in thread

-
-	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
-	if (!cd) {
-		err = -ENOMEM;
-		goto out_put_context;
-	}
-
-	refcount_set(&cd->refs, 1);
-
-	mutex_lock(&arm_smmu_asid_lock);
-	ret = arm_smmu_share_asid(mm, asid);
-	if (ret) {
-		mutex_unlock(&arm_smmu_asid_lock);
-		goto out_free_cd;
-	}
-
-	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
-	mutex_unlock(&arm_smmu_asid_lock);
-
-	if (err)
-		goto out_free_asid;
-
-	tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - vabits_actual) |
-	      FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) |
-	      FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) |
-	      FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) |
-	      CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
-
-	switch (PAGE_SIZE) {
-	case SZ_4K:
-		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_4K);
-		break;
-	case SZ_16K:
-		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_16K);
-		break;
-	case SZ_64K:
-		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_64K);
-		break;
-	default:
-		WARN_ON(1);
-		err = -EINVAL;
-		goto out_free_asid;
-	}
-
-	reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
-	par = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT);
-	tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_IPS, par);
-
-	cd->ttbr = virt_to_phys(mm->pgd);
-	cd->tcr = tcr;
-	/*
-	 * MAIR value is pretty much constant and global, so we can just get it
-	 * from the current CPU register
-	 */
-	cd->mair = read_sysreg(mair_el1);
-	cd->asid = asid;
-	cd->mm = mm;
-
-	return cd;
-
-out_free_asid:
-	iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd);
-out_free_cd:
-	kfree(cd);
-out_put_context:
-	arm64_mm_context_put(mm);
-	return err < 0 ? ERR_PTR(err) : ret;
-}
-
-static void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-				    struct arm_smmu_ctx_desc *cd)
-{
-	if (iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd)) {
-		/* Unpin ASID */
-		arm64_mm_context_put(cd->mm);
-		kfree(cd);
-	}
-}
-
 static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 					 struct mm_struct *mm,
 					 unsigned long start, unsigned long end)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index d5cf2d521c29..13ff024ab0d5 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -773,6 +773,10 @@ struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm,
 void arm_smmu_sva_unbind(struct iommu_sva *handle);
 u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle);
 void arm_smmu_sva_notifier_synchronize(void);
+struct arm_smmu_ctx_desc *
+arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm);
+void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
+			     struct arm_smmu_ctx_desc *cd);
 #else /* CONFIG_ARM_SMMU_V3_SVA */
 static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
 {
@@ -828,5 +832,13 @@ static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
 }
 
 static inline void arm_smmu_sva_notifier_synchronize(void) {}
+struct arm_smmu_ctx_desc *
+arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
+{
+	return ERR_PTR(-EINVAL);
+}
+
+static inline void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
+					   struct arm_smmu_ctx_desc *cd) {}
 #endif /* CONFIG_ARM_SMMU_V3_SVA */
 #endif /* _ARM_SMMU_V3_H */
diff --git a/drivers/iommu/iommu-pasid-table.h b/drivers/iommu/iommu-pasid-table.h
index 4d6590e60f9b..0e774ae6682a 100644
--- a/drivers/iommu/iommu-pasid-table.h
+++ b/drivers/iommu/iommu-pasid-table.h
@@ -8,6 +8,7 @@
 #define __IOMMU_PASID_TABLE_H
 
 #include <linux/io-pgtable.h>
+#include <linux/mmu_context.h>
 #include <linux/mmu_notifier.h>
 
 #include "arm/arm-smmu-v3/arm-smmu-v3.h"
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 97+ messages in thread
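
To see how the helpers this patch exports from cd-lib are consumed, here is a
condensed sketch of a bind path, assembled from the sva code in this thread.
The function name example_sva_bind is illustrative only and does not exist in
the series; the individual calls, however, match the diffs above and below:

	/*
	 * Hypothetical caller: allocate a shared context descriptor for @mm
	 * and install it at the mm's PASID, mirroring what
	 * arm_smmu_mmu_notifier_get() does in arm-smmu-v3-sva.c.
	 */
	static struct arm_smmu_ctx_desc *
	example_sva_bind(struct arm_smmu_domain *smmu_domain, struct mm_struct *mm)
	{
		struct iommu_pasid_table *tbl = smmu_domain->tbl;
		struct arm_smmu_ctx_desc *cd;
		int ret;

		cd = arm_smmu_alloc_shared_cd(tbl, mm);
		if (IS_ERR(cd))
			return cd;

		/* Write the CD into the PASID table; DMA on GVA can then start. */
		ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, cd);
		if (ret) {
			arm_smmu_free_shared_cd(tbl, cd);
			return ERR_PTR(ret);
		}
		return cd;
	}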

* [PATCH RFC v1 08/11] iommu/arm-smmu-v3: Implement shared context alloc and free ops
@ 2021-04-23  9:51   ` Vivek Gautam
  0 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-04-23  9:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

Implement the alloc_shared_cd and free_shared_cd pasid-table ops in
cd-lib, and start using them in the arm-smmu-v3-sva implementation.

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
 .../arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c      | 71 ++++++++--------
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 83 ++++++++-----------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   |  1 -
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   | 14 ----
 4 files changed, 73 insertions(+), 96 deletions(-)
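
For reference while reading the diff below: the iommu_psdtable_mmu_notifier
structure and the alloc_shared/free_shared wrappers are introduced elsewhere
in the series and are not shown in these hunks. The shapes below are a sketch
inferred from how this patch uses the fields and helpers, so treat them as an
assumption rather than the series' exact code:

	/* Sketch only: generic per-mm notifier tracked by the pasid table. */
	struct iommu_psdtable_mmu_notifier {
		struct mmu_notifier	mn;
		bool			cleared;
		refcount_t		refs;
		struct list_head	list;	/* on iommu_pasid_table mmu_notifiers */
		void			*cookie; /* e.g. the arm_smmu_domain */
		union {
			struct arm_smmu_ctx_desc *cd;	/* Arm SMMUv3 vendor data */
		} vendor;
	};

	#define mn_to_pstiommu(mn) \
		container_of(mn, struct iommu_psdtable_mmu_notifier, mn)

	/*
	 * Assumed wrappers around the new ops, taking tbl->ops to be the
	 * iommu_vendor_psdtable_ops pointer of the pasid table:
	 */
	static inline struct iommu_psdtable_mmu_notifier *
	iommu_psdtable_alloc_shared(struct iommu_pasid_table *tbl,
				    struct mm_struct *mm, struct xarray *xa,
				    u32 asid_bits)
	{
		return tbl->ops->alloc_shared(tbl, mm, xa, asid_bits);
	}

	static inline void
	iommu_psdtable_free_shared(struct iommu_pasid_table *tbl,
				   struct xarray *xa, void *cookie)
	{
		tbl->ops->free_shared(tbl, xa, cookie);
	}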

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
index 537b7c784d40..b87829796596 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
@@ -285,16 +285,14 @@ static bool arm_smmu_free_asid(struct xarray *xa, void *cookie_cd)
  * descriptor is using it, try to replace it.
  */
 static struct arm_smmu_ctx_desc *
-arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
+arm_smmu_share_asid(struct iommu_pasid_table *tbl, struct mm_struct *mm,
+		    struct xarray *xa, u16 asid, u32 asid_bits)
 {
 	int ret;
 	u32 new_asid;
 	struct arm_smmu_ctx_desc *cd;
-	struct arm_smmu_device *smmu;
-	struct arm_smmu_domain *smmu_domain;
-	struct iommu_pasid_table *tbl;
 
-	cd = xa_load(&arm_smmu_asid_xa, asid);
+	cd = xa_load(xa, asid);
 	if (!cd)
 		return NULL;
 
@@ -306,12 +304,8 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 		return cd;
 	}
 
-	smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
-	smmu = smmu_domain->smmu;
-	tbl = smmu_domain->tbl;
-
-	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
-		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+	ret = xa_alloc(xa, &new_asid, cd, XA_LIMIT(1, (1 << asid_bits) - 1),
+		       GFP_KERNEL);
 	if (ret)
 		return ERR_PTR(-ENOSPC);
 	/*
@@ -325,48 +319,52 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 	 * be some overlap between use of both ASIDs, until we invalidate the
 	 * TLB.
 	 */
-	ret = iommu_psdtable_write(tbl, &tbl->cfg, 0, cd);
+	ret = arm_smmu_write_ctx_desc(&tbl->cfg, 0, cd);
 	if (ret)
 		return ERR_PTR(-ENOSYS);
 
 	/* Invalidate TLB entries previously associated with that context */
-	iommu_psdtable_flush_tlb(tbl, smmu_domain, asid);
+	iommu_psdtable_flush_tlb(tbl, tbl->cookie, asid);
 
-	xa_erase(&arm_smmu_asid_xa, asid);
+	xa_erase(xa, asid);
 	return NULL;
 }
 
-struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
+static struct iommu_psdtable_mmu_notifier *
+arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm,
+			 struct xarray *xa, u32 asid_bits)
 {
 	u16 asid;
 	int err = 0;
 	u64 tcr, par, reg;
 	struct arm_smmu_ctx_desc *cd;
 	struct arm_smmu_ctx_desc *ret = NULL;
+	struct iommu_psdtable_mmu_notifier *pst_mn;
 
 	asid = arm64_mm_context_get(mm);
 	if (!asid)
 		return ERR_PTR(-ESRCH);
 
+	pst_mn = kzalloc(sizeof(*pst_mn), GFP_KERNEL);
+	if (!pst_mn) {
+		err = -ENOMEM;
+		goto out_put_context;
+	}
+
 	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
 	if (!cd) {
 		err = -ENOMEM;
-		goto out_put_context;
+		goto out_free_mn;
 	}
 
 	refcount_set(&cd->refs, 1);
 
-	mutex_lock(&arm_smmu_asid_lock);
-	ret = arm_smmu_share_asid(mm, asid);
+	ret = arm_smmu_share_asid(tbl, mm, xa, asid, asid_bits);
 	if (ret) {
-		mutex_unlock(&arm_smmu_asid_lock);
 		goto out_free_cd;
 	}
 
-	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
-	mutex_unlock(&arm_smmu_asid_lock);
-
+	err = xa_insert(xa, asid, cd, GFP_KERNEL);
 	if (err)
 		goto out_free_asid;
 
@@ -406,21 +404,26 @@ arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
 	cd->asid = asid;
 	cd->mm = mm;
 
-	return cd;
+	pst_mn->vendor.cd = cd;
+	return pst_mn;
 
 out_free_asid:
-	iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd);
+	arm_smmu_free_asid(xa, cd);
 out_free_cd:
 	kfree(cd);
+out_free_mn:
+	kfree(pst_mn);
 out_put_context:
 	arm64_mm_context_put(mm);
 	return err < 0 ? ERR_PTR(err) : ret;
 }
 
-void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-			     struct arm_smmu_ctx_desc *cd)
+static void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
+				    struct xarray *xa, void *cookie)
 {
-	if (iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd)) {
+	struct arm_smmu_ctx_desc *cd = cookie;
+
+	if (iommu_psdtable_free_asid(tbl, xa, cd)) {
 		/* Unpin ASID */
 		arm64_mm_context_put(cd->mm);
 		kfree(cd);
@@ -428,11 +431,13 @@ void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
 }
 
 struct iommu_vendor_psdtable_ops arm_cd_table_ops = {
-	.alloc	 = arm_smmu_alloc_cd_tables,
-	.free	 = arm_smmu_free_cd_tables,
-	.prepare = arm_smmu_prepare_cd,
-	.write	 = arm_smmu_write_ctx_desc,
-	.free_asid = arm_smmu_free_asid,
+	.alloc		= arm_smmu_alloc_cd_tables,
+	.free		= arm_smmu_free_cd_tables,
+	.prepare	= arm_smmu_prepare_cd,
+	.write		= arm_smmu_write_ctx_desc,
+	.free_asid	= arm_smmu_free_asid,
+	.alloc_shared	= arm_smmu_alloc_shared_cd,
+	.free_shared	= arm_smmu_free_shared_cd,
 };
 
 struct iommu_pasid_table *
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index da35d4cc0c1e..ef28d0c409da 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -13,23 +13,12 @@
 #include "../../io-pgtable-arm.h"
 #include "../../iommu-pasid-table.h"
 
-struct arm_smmu_mmu_notifier {
-	struct mmu_notifier		mn;
-	struct arm_smmu_ctx_desc	*cd;
-	bool				cleared;
-	refcount_t			refs;
-	struct list_head		list;
-	struct arm_smmu_domain		*domain;
-};
-
-#define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
-
 struct arm_smmu_bond {
-	struct iommu_sva		sva;
-	struct mm_struct		*mm;
-	struct arm_smmu_mmu_notifier	*smmu_mn;
-	struct list_head		list;
-	refcount_t			refs;
+	struct iommu_sva			sva;
+	struct mm_struct			*mm;
+	struct iommu_psdtable_mmu_notifier	*smmu_mn;
+	struct list_head			list;
+	refcount_t				refs;
 };
 
 #define sva_to_bond(handle) \
@@ -41,20 +30,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 					 struct mm_struct *mm,
 					 unsigned long start, unsigned long end)
 {
-	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
+	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
+	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
 	size_t size = end - start + 1;
 
 	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
-		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
+		arm_smmu_tlb_inv_range_asid(start, size, cd->asid,
 					    PAGE_SIZE, false, smmu_domain);
 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
 }
 
 static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 {
-	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
+	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
+	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
 	struct iommu_pasid_table *tbl = smmu_domain->tbl;
 
 	mutex_lock(&sva_lock);
@@ -69,7 +60,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 	 */
 	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, &quiet_cd);
 
-	iommu_psdtable_flush_tlb(tbl, smmu_domain, smmu_mn->cd->asid);
+	iommu_psdtable_flush_tlb(tbl, smmu_domain, cd->asid);
 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
 
 	smmu_mn->cleared = true;
@@ -78,7 +69,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 
 static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn)
 {
-	kfree(mn_to_smmu(mn));
+	kfree(mn_to_pstiommu(mn));
 }
 
 static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
@@ -88,63 +79,59 @@ static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
 };
 
 /* Allocate or get existing MMU notifier for this {domain, mm} pair */
-static struct arm_smmu_mmu_notifier *
+static struct iommu_psdtable_mmu_notifier *
 arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
 			  struct mm_struct *mm)
 {
 	int ret;
-	struct arm_smmu_ctx_desc *cd;
-	struct arm_smmu_mmu_notifier *smmu_mn;
+	struct iommu_psdtable_mmu_notifier *smmu_mn;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct iommu_pasid_table *tbl = smmu_domain->tbl;
 
-	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
+	list_for_each_entry(smmu_mn, &tbl->mmu_notifiers, list) {
 		if (smmu_mn->mn.mm == mm) {
 			refcount_inc(&smmu_mn->refs);
 			return smmu_mn;
 		}
 	}
 
-	cd = arm_smmu_alloc_shared_cd(tbl, mm);
-	if (IS_ERR(cd))
-		return ERR_CAST(cd);
-
-	smmu_mn = kzalloc(sizeof(*smmu_mn), GFP_KERNEL);
-	if (!smmu_mn) {
-		ret = -ENOMEM;
-		goto err_free_cd;
-	}
+	mutex_lock(&arm_smmu_asid_lock);
+	smmu_mn = iommu_psdtable_alloc_shared(tbl, mm, &arm_smmu_asid_xa,
+					      smmu->asid_bits);
+	mutex_unlock(&arm_smmu_asid_lock);
+	if (IS_ERR(smmu_mn))
+		return ERR_CAST(smmu_mn);
 
 	refcount_set(&smmu_mn->refs, 1);
-	smmu_mn->cd = cd;
-	smmu_mn->domain = smmu_domain;
+	smmu_mn->cookie = smmu_domain;
 	smmu_mn->mn.ops = &arm_smmu_mmu_notifier_ops;
 
 	ret = mmu_notifier_register(&smmu_mn->mn, mm);
-	if (ret) {
-		kfree(smmu_mn);
+	if (ret)
 		goto err_free_cd;
-	}
 
-	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, cd);
+	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
+				   smmu_mn->vendor.cd);
 	if (ret)
 		goto err_put_notifier;
 
-	list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
+	list_add(&smmu_mn->list, &tbl->mmu_notifiers);
 	return smmu_mn;
 
 err_put_notifier:
 	/* Frees smmu_mn */
 	mmu_notifier_put(&smmu_mn->mn);
 err_free_cd:
-	arm_smmu_free_shared_cd(tbl, cd);
+	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, smmu_mn->vendor.cd);
 	return ERR_PTR(ret);
 }
 
-static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
+static void
+arm_smmu_mmu_notifier_put(struct iommu_psdtable_mmu_notifier *smmu_mn)
 {
 	struct mm_struct *mm = smmu_mn->mn.mm;
-	struct arm_smmu_ctx_desc *cd = smmu_mn->cd;
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
+	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
 	struct iommu_pasid_table *tbl = smmu_domain->tbl;
 
 	if (!refcount_dec_and_test(&smmu_mn->refs))
@@ -164,7 +151,7 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
 
 	/* Frees smmu_mn */
 	mmu_notifier_put(&smmu_mn->mn);
-	arm_smmu_free_shared_cd(tbl, cd);
+	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, cd);
 }
 
 static struct iommu_sva *
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index bbc4dc75de72..e55567b4d2f4 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1987,7 +1987,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 	mutex_init(&smmu_domain->init_mutex);
 	INIT_LIST_HEAD(&smmu_domain->devices);
 	spin_lock_init(&smmu_domain->devices_lock);
-	INIT_LIST_HEAD(&smmu_domain->mmu_notifiers);
 
 	return &smmu_domain->domain;
 }
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 13ff024ab0d5..8a689b4316ac 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -737,8 +737,6 @@ struct arm_smmu_domain {
 
 	struct list_head		devices;
 	spinlock_t			devices_lock;
-
-	struct list_head		mmu_notifiers;
 };
 
 static inline struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
@@ -773,10 +771,6 @@ struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm,
 void arm_smmu_sva_unbind(struct iommu_sva *handle);
 u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle);
 void arm_smmu_sva_notifier_synchronize(void);
-struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm);
-void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-			     struct arm_smmu_ctx_desc *cd);
 #else /* CONFIG_ARM_SMMU_V3_SVA */
 static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
 {
@@ -832,13 +826,5 @@ static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
 }
 
 static inline void arm_smmu_sva_notifier_synchronize(void) {}
-struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
-{
-	return ERR_PTR(-EINVAL);
-}
-
-static inline void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-					   struct arm_smmu_ctx_desc *cd) {}
 #endif /* CONFIG_ARM_SMMU_V3_SVA */
 #endif /* _ARM_SMMU_V3_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 97+ messages in thread

* [PATCH RFC v1 08/11] iommu/arm-smmu-v3: Implement shared context alloc and free ops
@ 2021-04-23  9:51   ` Vivek Gautam
  0 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-04-23  9:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: jean-philippe, kevin.tian, mst, will.deacon, Vivek Gautam, robin.murphy

Implementing the alloc_shared_cd and free_shared_cd in cd-lib, and
start using them for arm-smmu-v3-sva implementation.

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
 .../arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c      | 71 ++++++++--------
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 83 ++++++++-----------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   |  1 -
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   | 14 ----
 4 files changed, 73 insertions(+), 96 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
index 537b7c784d40..b87829796596 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
@@ -285,16 +285,14 @@ static bool arm_smmu_free_asid(struct xarray *xa, void *cookie_cd)
  * descriptor is using it, try to replace it.
  */
 static struct arm_smmu_ctx_desc *
-arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
+arm_smmu_share_asid(struct iommu_pasid_table *tbl, struct mm_struct *mm,
+		    struct xarray *xa, u16 asid, u32 asid_bits)
 {
 	int ret;
 	u32 new_asid;
 	struct arm_smmu_ctx_desc *cd;
-	struct arm_smmu_device *smmu;
-	struct arm_smmu_domain *smmu_domain;
-	struct iommu_pasid_table *tbl;
 
-	cd = xa_load(&arm_smmu_asid_xa, asid);
+	cd = xa_load(xa, asid);
 	if (!cd)
 		return NULL;
 
@@ -306,12 +304,8 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 		return cd;
 	}
 
-	smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
-	smmu = smmu_domain->smmu;
-	tbl = smmu_domain->tbl;
-
-	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
-		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+	ret = xa_alloc(xa, &new_asid, cd, XA_LIMIT(1, (1 << asid_bits) - 1),
+		       GFP_KERNEL);
 	if (ret)
 		return ERR_PTR(-ENOSPC);
 	/*
@@ -325,48 +319,52 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 	 * be some overlap between use of both ASIDs, until we invalidate the
 	 * TLB.
 	 */
-	ret = iommu_psdtable_write(tbl, &tbl->cfg, 0, cd);
+	ret = arm_smmu_write_ctx_desc(&tbl->cfg, 0, cd);
 	if (ret)
 		return ERR_PTR(-ENOSYS);
 
 	/* Invalidate TLB entries previously associated with that context */
-	iommu_psdtable_flush_tlb(tbl, smmu_domain, asid);
+	iommu_psdtable_flush_tlb(tbl, tbl->cookie, asid);
 
-	xa_erase(&arm_smmu_asid_xa, asid);
+	xa_erase(xa, asid);
 	return NULL;
 }
 
-struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
+static struct iommu_psdtable_mmu_notifier *
+arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm,
+			 struct xarray *xa, u32 asid_bits)
 {
 	u16 asid;
 	int err = 0;
 	u64 tcr, par, reg;
 	struct arm_smmu_ctx_desc *cd;
 	struct arm_smmu_ctx_desc *ret = NULL;
+	struct iommu_psdtable_mmu_notifier *pst_mn;
 
 	asid = arm64_mm_context_get(mm);
 	if (!asid)
 		return ERR_PTR(-ESRCH);
 
+	pst_mn = kzalloc(sizeof(*pst_mn), GFP_KERNEL);
+	if (!pst_mn) {
+		err = -ENOMEM;
+		goto out_put_context;
+	}
+
 	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
 	if (!cd) {
 		err = -ENOMEM;
-		goto out_put_context;
+		goto out_free_mn;
 	}
 
 	refcount_set(&cd->refs, 1);
 
-	mutex_lock(&arm_smmu_asid_lock);
-	ret = arm_smmu_share_asid(mm, asid);
+	ret = arm_smmu_share_asid(tbl, mm, xa, asid, asid_bits);
 	if (ret) {
-		mutex_unlock(&arm_smmu_asid_lock);
 		goto out_free_cd;
 	}
 
-	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
-	mutex_unlock(&arm_smmu_asid_lock);
-
+	err = xa_insert(xa, asid, cd, GFP_KERNEL);
 	if (err)
 		goto out_free_asid;
 
@@ -406,21 +404,26 @@ arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
 	cd->asid = asid;
 	cd->mm = mm;
 
-	return cd;
+	pst_mn->vendor.cd = cd;
+	return pst_mn;
 
 out_free_asid:
-	iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd);
+	arm_smmu_free_asid(xa, cd);
 out_free_cd:
 	kfree(cd);
+out_free_mn:
+	kfree(pst_mn);
 out_put_context:
 	arm64_mm_context_put(mm);
 	return err < 0 ? ERR_PTR(err) : ret;
 }
 
-void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-			     struct arm_smmu_ctx_desc *cd)
+static void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
+				    struct xarray *xa, void *cookie)
 {
-	if (iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd)) {
+	struct arm_smmu_ctx_desc *cd = cookie;
+
+	if (iommu_psdtable_free_asid(tbl, xa, cd)) {
 		/* Unpin ASID */
 		arm64_mm_context_put(cd->mm);
 		kfree(cd);
@@ -428,11 +431,13 @@ void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
 }
 
 struct iommu_vendor_psdtable_ops arm_cd_table_ops = {
-	.alloc	 = arm_smmu_alloc_cd_tables,
-	.free	 = arm_smmu_free_cd_tables,
-	.prepare = arm_smmu_prepare_cd,
-	.write	 = arm_smmu_write_ctx_desc,
-	.free_asid = arm_smmu_free_asid,
+	.alloc		= arm_smmu_alloc_cd_tables,
+	.free		= arm_smmu_free_cd_tables,
+	.prepare	= arm_smmu_prepare_cd,
+	.write		= arm_smmu_write_ctx_desc,
+	.free_asid	= arm_smmu_free_asid,
+	.alloc_shared	= arm_smmu_alloc_shared_cd,
+	.free_shared	= arm_smmu_free_shared_cd,
 };
 
 struct iommu_pasid_table *
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index da35d4cc0c1e..ef28d0c409da 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -13,23 +13,12 @@
 #include "../../io-pgtable-arm.h"
 #include "../../iommu-pasid-table.h"
 
-struct arm_smmu_mmu_notifier {
-	struct mmu_notifier		mn;
-	struct arm_smmu_ctx_desc	*cd;
-	bool				cleared;
-	refcount_t			refs;
-	struct list_head		list;
-	struct arm_smmu_domain		*domain;
-};
-
-#define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
-
 struct arm_smmu_bond {
-	struct iommu_sva		sva;
-	struct mm_struct		*mm;
-	struct arm_smmu_mmu_notifier	*smmu_mn;
-	struct list_head		list;
-	refcount_t			refs;
+	struct iommu_sva			sva;
+	struct mm_struct			*mm;
+	struct iommu_psdtable_mmu_notifier	*smmu_mn;
+	struct list_head			list;
+	refcount_t				refs;
 };
 
 #define sva_to_bond(handle) \
@@ -41,20 +30,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 					 struct mm_struct *mm,
 					 unsigned long start, unsigned long end)
 {
-	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
+	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
+	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
 	size_t size = end - start + 1;
 
 	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
-		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
+		arm_smmu_tlb_inv_range_asid(start, size, cd->asid,
 					    PAGE_SIZE, false, smmu_domain);
 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
 }
 
 static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 {
-	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
+	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
+	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
 	struct iommu_pasid_table *tbl = smmu_domain->tbl;
 
 	mutex_lock(&sva_lock);
@@ -69,7 +60,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 	 */
 	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, &quiet_cd);
 
-	iommu_psdtable_flush_tlb(tbl, smmu_domain, smmu_mn->cd->asid);
+	iommu_psdtable_flush_tlb(tbl, smmu_domain, cd->asid);
 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
 
 	smmu_mn->cleared = true;
@@ -78,7 +69,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 
 static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn)
 {
-	kfree(mn_to_smmu(mn));
+	kfree(mn_to_pstiommu(mn));
 }
 
 static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
@@ -88,63 +79,59 @@ static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
 };
 
 /* Allocate or get existing MMU notifier for this {domain, mm} pair */
-static struct arm_smmu_mmu_notifier *
+static struct iommu_psdtable_mmu_notifier *
 arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
 			  struct mm_struct *mm)
 {
 	int ret;
-	struct arm_smmu_ctx_desc *cd;
-	struct arm_smmu_mmu_notifier *smmu_mn;
+	struct iommu_psdtable_mmu_notifier *smmu_mn;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct iommu_pasid_table *tbl = smmu_domain->tbl;
 
-	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
+	list_for_each_entry(smmu_mn, &tbl->mmu_notifiers, list) {
 		if (smmu_mn->mn.mm == mm) {
 			refcount_inc(&smmu_mn->refs);
 			return smmu_mn;
 		}
 	}
 
-	cd = arm_smmu_alloc_shared_cd(tbl, mm);
-	if (IS_ERR(cd))
-		return ERR_CAST(cd);
-
-	smmu_mn = kzalloc(sizeof(*smmu_mn), GFP_KERNEL);
-	if (!smmu_mn) {
-		ret = -ENOMEM;
-		goto err_free_cd;
-	}
+	mutex_lock(&arm_smmu_asid_lock);
+	smmu_mn = iommu_psdtable_alloc_shared(tbl, mm, &arm_smmu_asid_xa,
+					      smmu->asid_bits);
+	mutex_unlock(&arm_smmu_asid_lock);
+	if (IS_ERR(smmu_mn))
+		return ERR_CAST(smmu_mn);
 
 	refcount_set(&smmu_mn->refs, 1);
-	smmu_mn->cd = cd;
-	smmu_mn->domain = smmu_domain;
+	smmu_mn->cookie = smmu_domain;
 	smmu_mn->mn.ops = &arm_smmu_mmu_notifier_ops;
 
 	ret = mmu_notifier_register(&smmu_mn->mn, mm);
-	if (ret) {
-		kfree(smmu_mn);
+	if (ret)
 		goto err_free_cd;
-	}
 
-	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, cd);
+	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
+				   smmu_mn->vendor.cd);
 	if (ret)
 		goto err_put_notifier;
 
-	list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
+	list_add(&smmu_mn->list, &tbl->mmu_notifiers);
 	return smmu_mn;
 
 err_put_notifier:
 	/* Frees smmu_mn */
 	mmu_notifier_put(&smmu_mn->mn);
 err_free_cd:
-	arm_smmu_free_shared_cd(tbl, cd);
+	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, smmu_mn->vendor.cd);
 	return ERR_PTR(ret);
 }
 
-static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
+static void
+arm_smmu_mmu_notifier_put(struct iommu_psdtable_mmu_notifier *smmu_mn)
 {
 	struct mm_struct *mm = smmu_mn->mn.mm;
-	struct arm_smmu_ctx_desc *cd = smmu_mn->cd;
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
+	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
 	struct iommu_pasid_table *tbl = smmu_domain->tbl;
 
 	if (!refcount_dec_and_test(&smmu_mn->refs))
@@ -164,7 +151,7 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
 
 	/* Frees smmu_mn */
 	mmu_notifier_put(&smmu_mn->mn);
-	arm_smmu_free_shared_cd(tbl, cd);
+	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, cd);
 }
 
 static struct iommu_sva *
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index bbc4dc75de72..e55567b4d2f4 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1987,7 +1987,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 	mutex_init(&smmu_domain->init_mutex);
 	INIT_LIST_HEAD(&smmu_domain->devices);
 	spin_lock_init(&smmu_domain->devices_lock);
-	INIT_LIST_HEAD(&smmu_domain->mmu_notifiers);
 
 	return &smmu_domain->domain;
 }
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 13ff024ab0d5..8a689b4316ac 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -737,8 +737,6 @@ struct arm_smmu_domain {
 
 	struct list_head		devices;
 	spinlock_t			devices_lock;
-
-	struct list_head		mmu_notifiers;
 };
 
 static inline struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
@@ -773,10 +771,6 @@ struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm,
 void arm_smmu_sva_unbind(struct iommu_sva *handle);
 u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle);
 void arm_smmu_sva_notifier_synchronize(void);
-struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm);
-void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-			     struct arm_smmu_ctx_desc *cd);
 #else /* CONFIG_ARM_SMMU_V3_SVA */
 static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
 {
@@ -832,13 +826,5 @@ static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
 }
 
 static inline void arm_smmu_sva_notifier_synchronize(void) {}
-struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
-{
-	return ERR_PTR(-EINVAL);
-}
-
-static inline void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-					   struct arm_smmu_ctx_desc *cd) {}
 #endif /* CONFIG_ARM_SMMU_V3_SVA */
 #endif /* _ARM_SMMU_V3_H */
-- 
2.17.1

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply related	[flat|nested] 97+ messages in thread

* [PATCH RFC v1 08/11] iommu/arm-smmu-v3: Implement shared context alloc and free ops
@ 2021-04-23  9:51   ` Vivek Gautam
  0 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-04-23  9:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

Implementing the alloc_shared_cd and free_shared_cd in cd-lib, and
start using them for arm-smmu-v3-sva implementation.

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
 .../arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c      | 71 ++++++++--------
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 83 ++++++++-----------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   |  1 -
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   | 14 ----
 4 files changed, 73 insertions(+), 96 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
index 537b7c784d40..b87829796596 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
@@ -285,16 +285,14 @@ static bool arm_smmu_free_asid(struct xarray *xa, void *cookie_cd)
  * descriptor is using it, try to replace it.
  */
 static struct arm_smmu_ctx_desc *
-arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
+arm_smmu_share_asid(struct iommu_pasid_table *tbl, struct mm_struct *mm,
+		    struct xarray *xa, u16 asid, u32 asid_bits)
 {
 	int ret;
 	u32 new_asid;
 	struct arm_smmu_ctx_desc *cd;
-	struct arm_smmu_device *smmu;
-	struct arm_smmu_domain *smmu_domain;
-	struct iommu_pasid_table *tbl;
 
-	cd = xa_load(&arm_smmu_asid_xa, asid);
+	cd = xa_load(xa, asid);
 	if (!cd)
 		return NULL;
 
@@ -306,12 +304,8 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 		return cd;
 	}
 
-	smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
-	smmu = smmu_domain->smmu;
-	tbl = smmu_domain->tbl;
-
-	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
-		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+	ret = xa_alloc(xa, &new_asid, cd, XA_LIMIT(1, (1 << asid_bits) - 1),
+		       GFP_KERNEL);
 	if (ret)
 		return ERR_PTR(-ENOSPC);
 	/*
@@ -325,48 +319,52 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 	 * be some overlap between use of both ASIDs, until we invalidate the
 	 * TLB.
 	 */
-	ret = iommu_psdtable_write(tbl, &tbl->cfg, 0, cd);
+	ret = arm_smmu_write_ctx_desc(&tbl->cfg, 0, cd);
 	if (ret)
 		return ERR_PTR(-ENOSYS);
 
 	/* Invalidate TLB entries previously associated with that context */
-	iommu_psdtable_flush_tlb(tbl, smmu_domain, asid);
+	iommu_psdtable_flush_tlb(tbl, tbl->cookie, asid);
 
-	xa_erase(&arm_smmu_asid_xa, asid);
+	xa_erase(xa, asid);
 	return NULL;
 }
 
-struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
+static struct iommu_psdtable_mmu_notifier *
+arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm,
+			 struct xarray *xa, u32 asid_bits)
 {
 	u16 asid;
 	int err = 0;
 	u64 tcr, par, reg;
 	struct arm_smmu_ctx_desc *cd;
 	struct arm_smmu_ctx_desc *ret = NULL;
+	struct iommu_psdtable_mmu_notifier *pst_mn;
 
 	asid = arm64_mm_context_get(mm);
 	if (!asid)
 		return ERR_PTR(-ESRCH);
 
+	pst_mn = kzalloc(sizeof(*pst_mn), GFP_KERNEL);
+	if (!pst_mn) {
+		err = -ENOMEM;
+		goto out_put_context;
+	}
+
 	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
 	if (!cd) {
 		err = -ENOMEM;
-		goto out_put_context;
+		goto out_free_mn;
 	}
 
 	refcount_set(&cd->refs, 1);
 
-	mutex_lock(&arm_smmu_asid_lock);
-	ret = arm_smmu_share_asid(mm, asid);
+	ret = arm_smmu_share_asid(tbl, mm, xa, asid, asid_bits);
 	if (ret) {
-		mutex_unlock(&arm_smmu_asid_lock);
 		goto out_free_cd;
 	}
 
-	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
-	mutex_unlock(&arm_smmu_asid_lock);
-
+	err = xa_insert(xa, asid, cd, GFP_KERNEL);
 	if (err)
 		goto out_free_asid;
 
@@ -406,21 +404,26 @@ arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
 	cd->asid = asid;
 	cd->mm = mm;
 
-	return cd;
+	pst_mn->vendor.cd = cd;
+	return pst_mn;
 
 out_free_asid:
-	iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd);
+	arm_smmu_free_asid(xa, cd);
 out_free_cd:
 	kfree(cd);
+out_free_mn:
+	kfree(pst_mn);
 out_put_context:
 	arm64_mm_context_put(mm);
 	return err < 0 ? ERR_PTR(err) : ret;
 }
 
-void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-			     struct arm_smmu_ctx_desc *cd)
+static void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
+				    struct xarray *xa, void *cookie)
 {
-	if (iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd)) {
+	struct arm_smmu_ctx_desc *cd = cookie;
+
+	if (iommu_psdtable_free_asid(tbl, xa, cd)) {
 		/* Unpin ASID */
 		arm64_mm_context_put(cd->mm);
 		kfree(cd);
@@ -428,11 +431,13 @@ void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
 }
 
 struct iommu_vendor_psdtable_ops arm_cd_table_ops = {
-	.alloc	 = arm_smmu_alloc_cd_tables,
-	.free	 = arm_smmu_free_cd_tables,
-	.prepare = arm_smmu_prepare_cd,
-	.write	 = arm_smmu_write_ctx_desc,
-	.free_asid = arm_smmu_free_asid,
+	.alloc		= arm_smmu_alloc_cd_tables,
+	.free		= arm_smmu_free_cd_tables,
+	.prepare	= arm_smmu_prepare_cd,
+	.write		= arm_smmu_write_ctx_desc,
+	.free_asid	= arm_smmu_free_asid,
+	.alloc_shared	= arm_smmu_alloc_shared_cd,
+	.free_shared	= arm_smmu_free_shared_cd,
 };
 
 struct iommu_pasid_table *
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index da35d4cc0c1e..ef28d0c409da 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -13,23 +13,12 @@
 #include "../../io-pgtable-arm.h"
 #include "../../iommu-pasid-table.h"
 
-struct arm_smmu_mmu_notifier {
-	struct mmu_notifier		mn;
-	struct arm_smmu_ctx_desc	*cd;
-	bool				cleared;
-	refcount_t			refs;
-	struct list_head		list;
-	struct arm_smmu_domain		*domain;
-};
-
-#define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
-
 struct arm_smmu_bond {
-	struct iommu_sva		sva;
-	struct mm_struct		*mm;
-	struct arm_smmu_mmu_notifier	*smmu_mn;
-	struct list_head		list;
-	refcount_t			refs;
+	struct iommu_sva			sva;
+	struct mm_struct			*mm;
+	struct iommu_psdtable_mmu_notifier	*smmu_mn;
+	struct list_head			list;
+	refcount_t				refs;
 };
 
 #define sva_to_bond(handle) \
@@ -41,20 +30,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 					 struct mm_struct *mm,
 					 unsigned long start, unsigned long end)
 {
-	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
+	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
+	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
 	size_t size = end - start + 1;
 
 	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
-		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
+		arm_smmu_tlb_inv_range_asid(start, size, cd->asid,
 					    PAGE_SIZE, false, smmu_domain);
 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
 }
 
 static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 {
-	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
+	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
+	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
 	struct iommu_pasid_table *tbl = smmu_domain->tbl;
 
 	mutex_lock(&sva_lock);
@@ -69,7 +60,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 	 */
 	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, &quiet_cd);
 
-	iommu_psdtable_flush_tlb(tbl, smmu_domain, smmu_mn->cd->asid);
+	iommu_psdtable_flush_tlb(tbl, smmu_domain, cd->asid);
 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
 
 	smmu_mn->cleared = true;
@@ -78,7 +69,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 
 static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn)
 {
-	kfree(mn_to_smmu(mn));
+	kfree(mn_to_pstiommu(mn));
 }
 
 static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
@@ -88,63 +79,59 @@ static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
 };
 
 /* Allocate or get existing MMU notifier for this {domain, mm} pair */
-static struct arm_smmu_mmu_notifier *
+static struct iommu_psdtable_mmu_notifier *
 arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
 			  struct mm_struct *mm)
 {
 	int ret;
-	struct arm_smmu_ctx_desc *cd;
-	struct arm_smmu_mmu_notifier *smmu_mn;
+	struct iommu_psdtable_mmu_notifier *smmu_mn;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct iommu_pasid_table *tbl = smmu_domain->tbl;
 
-	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
+	list_for_each_entry(smmu_mn, &tbl->mmu_notifiers, list) {
 		if (smmu_mn->mn.mm == mm) {
 			refcount_inc(&smmu_mn->refs);
 			return smmu_mn;
 		}
 	}
 
-	cd = arm_smmu_alloc_shared_cd(tbl, mm);
-	if (IS_ERR(cd))
-		return ERR_CAST(cd);
-
-	smmu_mn = kzalloc(sizeof(*smmu_mn), GFP_KERNEL);
-	if (!smmu_mn) {
-		ret = -ENOMEM;
-		goto err_free_cd;
-	}
+	mutex_lock(&arm_smmu_asid_lock);
+	smmu_mn = iommu_psdtable_alloc_shared(tbl, mm, &arm_smmu_asid_xa,
+					      smmu->asid_bits);
+	mutex_unlock(&arm_smmu_asid_lock);
+	if (IS_ERR(smmu_mn))
+		return ERR_CAST(smmu_mn);
 
 	refcount_set(&smmu_mn->refs, 1);
-	smmu_mn->cd = cd;
-	smmu_mn->domain = smmu_domain;
+	smmu_mn->cookie = smmu_domain;
 	smmu_mn->mn.ops = &arm_smmu_mmu_notifier_ops;
 
 	ret = mmu_notifier_register(&smmu_mn->mn, mm);
-	if (ret) {
-		kfree(smmu_mn);
+	if (ret)
 		goto err_free_cd;
-	}
 
-	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, cd);
+	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
+				   smmu_mn->vendor.cd);
 	if (ret)
 		goto err_put_notifier;
 
-	list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
+	list_add(&smmu_mn->list, &tbl->mmu_notifiers);
 	return smmu_mn;
 
 err_put_notifier:
 	/* Frees smmu_mn */
 	mmu_notifier_put(&smmu_mn->mn);
 err_free_cd:
-	arm_smmu_free_shared_cd(tbl, cd);
+	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, smmu_mn->vendor.cd);
 	return ERR_PTR(ret);
 }
 
-static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
+static void
+arm_smmu_mmu_notifier_put(struct iommu_psdtable_mmu_notifier *smmu_mn)
 {
 	struct mm_struct *mm = smmu_mn->mn.mm;
-	struct arm_smmu_ctx_desc *cd = smmu_mn->cd;
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
+	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
 	struct iommu_pasid_table *tbl = smmu_domain->tbl;
 
 	if (!refcount_dec_and_test(&smmu_mn->refs))
@@ -164,7 +151,7 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
 
 	/* Frees smmu_mn */
 	mmu_notifier_put(&smmu_mn->mn);
-	arm_smmu_free_shared_cd(tbl, cd);
+	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, cd);
 }
 
 static struct iommu_sva *
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index bbc4dc75de72..e55567b4d2f4 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1987,7 +1987,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 	mutex_init(&smmu_domain->init_mutex);
 	INIT_LIST_HEAD(&smmu_domain->devices);
 	spin_lock_init(&smmu_domain->devices_lock);
-	INIT_LIST_HEAD(&smmu_domain->mmu_notifiers);
 
 	return &smmu_domain->domain;
 }
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 13ff024ab0d5..8a689b4316ac 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -737,8 +737,6 @@ struct arm_smmu_domain {
 
 	struct list_head		devices;
 	spinlock_t			devices_lock;
-
-	struct list_head		mmu_notifiers;
 };
 
 static inline struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
@@ -773,10 +771,6 @@ struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm,
 void arm_smmu_sva_unbind(struct iommu_sva *handle);
 u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle);
 void arm_smmu_sva_notifier_synchronize(void);
-struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm);
-void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-			     struct arm_smmu_ctx_desc *cd);
 #else /* CONFIG_ARM_SMMU_V3_SVA */
 static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
 {
@@ -832,13 +826,5 @@ static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
 }
 
 static inline void arm_smmu_sva_notifier_synchronize(void) {}
-struct arm_smmu_ctx_desc *
-arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
-{
-	return ERR_PTR(-EINVAL);
-}
-
-static inline void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
-					   struct arm_smmu_ctx_desc *cd) {}
 #endif /* CONFIG_ARM_SMMU_V3_SVA */
 #endif /* _ARM_SMMU_V3_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 97+ messages in thread

* [PATCH RFC v1 09/11] iommu/virtio: Implement sva bind/unbind calls
  2021-04-23  9:51 ` Vivek Gautam
  (?)
@ 2021-04-23  9:51   ` Vivek Gautam
  -1 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-04-23  9:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

SVA bind and unbind implementations allow preparing a translation
context with CPU page tables that can be programmed into the host
IOMMU hardware, realizing shared address space utilization between
the CPU and virtualized devices using virtio-iommu.

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
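For context, an endpoint driver in the guest would consume these ops
through the generic SVA API, roughly as below (a minimal sketch; the
device-specific PASID programming is elided):

	struct iommu_sva *handle;
	u32 pasid;

	/* Enable SVA on the endpoint, then bind the current mm */
	iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	handle = iommu_sva_bind_device(dev, current->mm, NULL);
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	/* Hand the PASID to the device so it can DMA on GVAs */
	pasid = iommu_sva_get_pasid(handle);

	/* ... device issues ATS/PRI-tagged DMA ... */

	iommu_sva_unbind_device(handle);
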
 drivers/iommu/virtio-iommu.c      | 199 +++++++++++++++++++++++++++++-
 include/uapi/linux/virtio_iommu.h |   2 +
 2 files changed, 199 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 250c137a211b..08f1294baeab 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -14,6 +14,9 @@
 #include <linux/interval_tree.h>
 #include <linux/iommu.h>
 #include <linux/io-pgtable.h>
+#include <linux/mm.h>
+#include <linux/mmu_context.h>
+#include <linux/mmu_notifier.h>
 #include <linux/module.h>
 #include <linux/of_iommu.h>
 #include <linux/of_platform.h>
@@ -28,6 +31,7 @@
 #include <uapi/linux/virtio_iommu.h>
 #include "iommu-pasid-table.h"
 #include "iommu-sva-lib.h"
+#include "io-pgtable-arm.h"
 
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
@@ -41,6 +45,7 @@ DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
 
 static DEFINE_MUTEX(sva_lock);
 static DEFINE_MUTEX(iopf_lock);
+static DEFINE_MUTEX(viommu_asid_lock);
 
 struct viommu_dev_pri_work {
 	struct work_struct		work;
@@ -88,10 +93,22 @@ struct viommu_mapping {
 struct viommu_mm {
 	int				pasid;
 	u64				archid;
+	struct viommu_sva_bond		*bond;
 	struct io_pgtable_ops		*ops;
 	struct viommu_domain		*domain;
 };
 
+struct viommu_sva_bond {
+	struct iommu_sva		sva;
+	struct mm_struct		*mm;
+	struct iommu_psdtable_mmu_notifier	*viommu_mn;
+	struct list_head		list;
+	refcount_t			refs;
+};
+
+#define sva_to_viommu_bond(handle) \
+	container_of(handle, struct viommu_sva_bond, sva)
+
 struct viommu_domain {
 	struct iommu_domain		domain;
 	struct viommu_dev		*viommu;
@@ -136,6 +153,7 @@ struct viommu_endpoint {
 	bool				pri_supported;
 	bool				sva_enabled;
 	bool				iopf_enabled;
+	struct list_head		bonds;
 };
 
 struct viommu_ep_entry {
@@ -1423,14 +1441,15 @@ static int viommu_attach_pasid_table(struct viommu_endpoint *vdev,
 
 		pst_cfg->iommu_dev = viommu->dev->parent;
 
+		mutex_lock(&viommu_asid_lock);
 		/* Prepare PASID tables info to allocate a new table */
 		ret = viommu_prepare_pst(vdev, pst_cfg, fmt);
 		if (ret)
-			return ret;
+			goto err_out_unlock;
 
 		ret = iommu_psdtable_alloc(tbl, pst_cfg);
 		if (ret)
-			return ret;
+			goto err_out_unlock;
 
 		pst_cfg->iommu_dev = viommu->dev->parent;
 		pst_cfg->fmt = PASID_TABLE_ARM_SMMU_V3;
@@ -1452,6 +1471,7 @@ static int viommu_attach_pasid_table(struct viommu_endpoint *vdev,
 			if (ret)
 				goto err_free_ops;
 		}
+		mutex_unlock(&viommu_asid_lock);
 	} else {
 		/* TODO: otherwise, check for compatibility with vdev. */
 		return -ENOSYS;
@@ -1467,6 +1487,8 @@ static int viommu_attach_pasid_table(struct viommu_endpoint *vdev,
 err_free_psdtable:
 	iommu_psdtable_free(tbl, &tbl->cfg);
 
+err_out_unlock:
+	mutex_unlock(&viommu_asid_lock);
 	return ret;
 }
 
@@ -1706,6 +1728,7 @@ static struct iommu_device *viommu_probe_device(struct device *dev)
 	vdev->dev = dev;
 	vdev->viommu = viommu;
 	INIT_LIST_HEAD(&vdev->resv_regions);
+	INIT_LIST_HEAD(&vdev->bonds);
 	dev_iommu_priv_set(dev, vdev);
 
 	if (viommu->probe_size) {
@@ -1755,6 +1778,175 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
 	return iommu_fwspec_add_ids(dev, args->args, 1);
 }
 
+static u32 viommu_sva_get_pasid(struct iommu_sva *handle)
+{
+	struct viommu_sva_bond *bond = sva_to_viommu_bond(handle);
+
+	return bond->mm->pasid;
+}
+
+static void viommu_mmu_notifier_free(struct mmu_notifier *mn)
+{
+	kfree(mn_to_pstiommu(mn));
+}
+
+static struct mmu_notifier_ops viommu_mmu_notifier_ops = {
+	.free_notifier		= viommu_mmu_notifier_free,
+};
+
+/* Allocate or get existing MMU notifier for this {domain, mm} pair */
+static struct iommu_psdtable_mmu_notifier *
+viommu_mmu_notifier_get(struct viommu_domain *vdomain, struct mm_struct *mm,
+			u32 asid_bits)
+{
+	int ret;
+	struct iommu_psdtable_mmu_notifier *viommu_mn;
+	struct iommu_pasid_table *tbl = vdomain->pasid_tbl;
+
+	list_for_each_entry(viommu_mn, &tbl->mmu_notifiers, list) {
+		if (viommu_mn->mn.mm == mm) {
+			refcount_inc(&viommu_mn->refs);
+			return viommu_mn;
+		}
+	}
+
+	mutex_lock(&viommu_asid_lock);
+	viommu_mn = iommu_psdtable_alloc_shared(tbl, mm, &viommu_asid_xa,
+						asid_bits);
+	mutex_unlock(&viommu_asid_lock);
+	if (IS_ERR(viommu_mn))
+		return ERR_CAST(viommu_mn);
+
+	refcount_set(&viommu_mn->refs, 1);
+	viommu_mn->cookie = vdomain;
+	viommu_mn->mn.ops = &viommu_mmu_notifier_ops;
+
+	ret = mmu_notifier_register(&viommu_mn->mn, mm);
+	if (ret)
+		goto err_free_cd;
+
+	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
+				   viommu_mn->vendor.cd);
+	if (ret)
+		goto err_put_notifier;
+
+	list_add(&viommu_mn->list, &tbl->mmu_notifiers);
+	return viommu_mn;
+
+err_put_notifier:
+	/* Frees viommu_mn */
+	mmu_notifier_put(&viommu_mn->mn);
+err_free_cd:
+	iommu_psdtable_free_shared(tbl, &viommu_asid_xa, viommu_mn->vendor.cd);
+	return ERR_PTR(ret);
+}
+
+static void
+viommu_mmu_notifier_put(struct iommu_psdtable_mmu_notifier *viommu_mn)
+{
+	struct mm_struct *mm = viommu_mn->mn.mm;
+	struct viommu_domain *vdomain = viommu_mn->cookie;
+	struct iommu_pasid_table *tbl = vdomain->pasid_tbl;
+	u16 asid = viommu_mn->vendor.cd->asid;
+
+	if (!refcount_dec_and_test(&viommu_mn->refs))
+		return;
+
+	list_del(&viommu_mn->list);
+	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, NULL);
+
+	/*
+	 * If we went through clear(), we've already invalidated, and no
+	 * new TLB entry can have been formed.
+	 */
+	if (!viommu_mn->cleared)
+		iommu_psdtable_flush_tlb(tbl, vdomain, asid);
+
+	/* Frees viommu_mn */
+	mmu_notifier_put(&viommu_mn->mn);
+	iommu_psdtable_free_shared(tbl, &viommu_asid_xa, viommu_mn->vendor.cd);
+}
+
+static struct iommu_sva *
+__viommu_sva_bind(struct device *dev, struct mm_struct *mm)
+{
+	int ret;
+	struct viommu_sva_bond *bond;
+	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+	struct virtio_iommu_probe_table_format *desc = vdev->pgtf;
+
+	if (!vdev || !vdev->sva_enabled)
+		return ERR_PTR(-ENODEV);
+
+	/* If bind() was already called for this {dev, mm} pair, reuse it. */
+	list_for_each_entry(bond, &vdev->bonds, list) {
+		if (bond->mm == mm) {
+			refcount_inc(&bond->refs);
+			return &bond->sva;
+		}
+	}
+
+	bond = kzalloc(sizeof(*bond), GFP_KERNEL);
+	if (!bond)
+		return ERR_PTR(-ENOMEM);
+
+	/* Allocate a PASID for this mm if necessary */
+	ret = iommu_sva_alloc_pasid(mm, 1, (1U << vdev->pasid_bits) - 1);
+	if (ret)
+		goto err_free_bond;
+
+	bond->mm = mm;
+	bond->sva.dev = dev;
+	refcount_set(&bond->refs, 1);
+
+	bond->viommu_mn = viommu_mmu_notifier_get(vdomain, mm, desc->asid_bits);
+	if (IS_ERR(bond->viommu_mn)) {
+		ret = PTR_ERR(bond->viommu_mn);
+		goto err_free_pasid;
+	}
+
+	list_add(&bond->list, &vdev->bonds);
+	return &bond->sva;
+
+err_free_pasid:
+	iommu_sva_free_pasid(mm);
+err_free_bond:
+	kfree(bond);
+	return ERR_PTR(ret);
+}
+
+/* closely follows arm_smmu_sva_bind() */
+static struct iommu_sva *viommu_sva_bind(struct device *dev,
+					 struct mm_struct *mm, void *drvdata)
+{
+	struct iommu_sva *handle;
+
+	mutex_lock(&sva_lock);
+	handle = __viommu_sva_bind(dev, mm);
+	mutex_unlock(&sva_lock);
+	return handle;
+}
+
+static void viommu_sva_unbind(struct iommu_sva *handle)
+{
+	struct viommu_sva_bond *bond = sva_to_viommu_bond(handle);
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(handle->dev);
+
+	if (vdev->pri_supported)
+		iopf_queue_flush_dev(handle->dev);
+
+	mutex_lock(&sva_lock);
+	if (refcount_dec_and_test(&bond->refs)) {
+		list_del(&bond->list);
+		viommu_mmu_notifier_put(bond->viommu_mn);
+		iommu_sva_free_pasid(bond->mm);
+		kfree(bond);
+	}
+	mutex_unlock(&sva_lock);
+}
+
 static bool viommu_endpoint_iopf_supported(struct viommu_endpoint *vdev)
 {
 	/* TODO: support Stall model later */
@@ -1960,6 +2152,9 @@ static struct iommu_ops viommu_ops = {
 	.dev_feat_enabled	= viommu_dev_feature_enabled,
 	.dev_enable_feat	= viommu_dev_enable_feature,
 	.dev_disable_feat	= viommu_dev_disable_feature,
+	.sva_bind		= viommu_sva_bind,
+	.sva_unbind		= viommu_sva_unbind,
+	.sva_get_pasid		= viommu_sva_get_pasid,
 };
 
 static int viommu_init_vqs(struct viommu_dev *viommu)
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 88a3db493108..c12d9b6a7243 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -122,6 +122,8 @@ struct virtio_iommu_req_attach_pst_arm {
 #define VIRTIO_IOMMU_PGTF_ARM_HPD0		(1ULL << 41)
 #define VIRTIO_IOMMU_PGTF_ARM_EPD1		(1 << 23)
 
+#define VIRTIO_IOMMU_PGTF_ARM_IPS_SHIFT		32
+#define VIRTIO_IOMMU_PGTF_ARM_IPS_MASK		0x7
 #define VIRTIO_IOMMU_PGTF_ARM_TG0_SHIFT		14
 #define VIRTIO_IOMMU_PGTF_ARM_TG0_MASK		0x3
 #define VIRTIO_IOMMU_PGTF_ARM_SH0_SHIFT		12
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 97+ messages in thread

* [PATCH RFC v1 10/11] uapi/virtio-iommu: Add a new request type to send page response
  2021-04-23  9:51 ` Vivek Gautam
  (?)
@ 2021-04-23  9:51   ` Vivek Gautam
  -1 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-04-23  9:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

Once a page fault has been handled, the response has to be sent to
the virtio-iommu back-end, from where it can be forwarded to the host
to complete the response to the I/O page fault generated by the
device. Add a new virt-queue request type to handle this.

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
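On the VMM side, the back-end would decode this request and relay the
response to the host kernel, along these lines (a sketch only, written
kernel-style, given the request buffer buf; viommu_forward_page_resp()
is a hypothetical helper for the VFIO page-response path):

	struct virtio_iommu_req_page_resp *resp = buf;

	switch (resp->head.type) {
	case VIRTIO_IOMMU_T_PAGE_RESP:
		/* Relay endpoint, PASID, group ID and response code */
		return viommu_forward_page_resp(le32_to_cpu(resp->endpoint),
						le32_to_cpu(resp->pasid),
						le32_to_cpu(resp->grpid),
						le16_to_cpu(resp->resp_code));
	default:
		return -ENOSYS;
	}
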
 include/uapi/linux/virtio_iommu.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index c12d9b6a7243..1b174b98663a 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -48,6 +48,7 @@ struct virtio_iommu_config {
 #define VIRTIO_IOMMU_T_PROBE			0x05
 #define VIRTIO_IOMMU_T_ATTACH_TABLE		0x06
 #define VIRTIO_IOMMU_T_INVALIDATE		0x07
+#define VIRTIO_IOMMU_T_PAGE_RESP		0x08
 
 /* Status types */
 #define VIRTIO_IOMMU_S_OK			0x00
@@ -70,6 +71,23 @@ struct virtio_iommu_req_tail {
 	__u8					reserved[3];
 };
 
+struct virtio_iommu_req_page_resp {
+	struct virtio_iommu_req_head		head;
+	__le32					domain;
+	__le32					endpoint;
+#define VIRTIO_IOMMU_PAGE_RESP_PASID_VALID	(1 << 0)
+	__le32					flags;
+	__le32					pasid;
+	__le32					grpid;
+#define VIRTIO_IOMMU_PAGE_RESP_SUCCESS		(0x0)
+#define VIRTIO_IOMMU_PAGE_RESP_INVALID		(0x1)
+#define VIRTIO_IOMMU_PAGE_RESP_FAILURE		(0x2)
+	__le16					resp_code;
+	__u8					pasid_valid;
+	__u8					reserved[9];
+	struct virtio_iommu_req_tail		tail;
+};
+
 struct virtio_iommu_req_attach {
 	struct virtio_iommu_req_head		head;
 	__le32					domain;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 97+ messages in thread

* [PATCH RFC v1 11/11] iommu/virtio: Add support to send page response
  2021-04-23  9:51 ` Vivek Gautam
  (?)
@ 2021-04-23  9:51   ` Vivek Gautam
  -1 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-04-23  9:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, virtualization
  Cc: joro, will.deacon, mst, robin.murphy, jean-philippe, eric.auger,
	kevin.tian, jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Vivek Gautam

Add a page_response iommu ops callback to send a page response to
the device that generated the I/O page fault.

Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
---
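The callback sits at the tail of the generic I/O page fault path: once
the fault consumer in the guest has resolved the fault, the core calls
back into the driver, roughly as below (a sketch of the caller side,
with fault pointing at the struct iommu_fault being completed; not
part of this patch):

	/* In the guest iopf handler, once the mm fault is handled */
	struct iommu_page_response resp = {
		.version	= IOMMU_PAGE_RESP_VERSION_1,
		.pasid		= fault->prm.pasid,
		.grpid		= fault->prm.grpid,
		.code		= IOMMU_PAGE_RESP_SUCCESS,
	};

	/* Ends up in viommu_page_response() through the iommu ops */
	iommu_page_response(dev, &resp);
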
 drivers/iommu/virtio-iommu.c | 47 ++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 08f1294baeab..6d62d9eae452 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -1778,6 +1778,52 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
 	return iommu_fwspec_add_ids(dev, args->args, 1);
 }
 
+static int viommu_page_response(struct device *dev,
+				struct iommu_fault_event *evt,
+				struct iommu_page_response *resp)
+{
+	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+	struct viommu_dev *viommu = vdev->viommu;
+	bool pasid_valid = resp->flags & IOMMU_PAGE_RESP_PASID_VALID;
+	struct virtio_iommu_req_page_resp req = {
+		.head.type	= VIRTIO_IOMMU_T_PAGE_RESP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.endpoint	= cpu_to_le32(fwspec->ids[0]),
+	};
+
+	if (vdev->pri_supported) {
+		bool needs_pasid = (evt->fault.prm.flags &
+				    IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID);
+
+		req.pasid_valid	= needs_pasid && pasid_valid;
+		req.flags	= cpu_to_le32((needs_pasid && pasid_valid) ?
+				   VIRTIO_IOMMU_PAGE_RESP_PASID_VALID : 0);
+		req.pasid	= cpu_to_le32(resp->pasid);
+		req.grpid	= cpu_to_le32(resp->grpid);
+
+		switch (resp->code) {
+		case IOMMU_PAGE_RESP_FAILURE:
+			req.resp_code = cpu_to_le16(VIRTIO_IOMMU_PAGE_RESP_FAILURE);
+			break;
+		case IOMMU_PAGE_RESP_INVALID:
+			req.resp_code = cpu_to_le16(VIRTIO_IOMMU_PAGE_RESP_INVALID);
+			break;
+		case IOMMU_PAGE_RESP_SUCCESS:
+			req.resp_code = cpu_to_le16(VIRTIO_IOMMU_PAGE_RESP_SUCCESS);
+			break;
+		default:
+			return -EINVAL;
+		}
+	} else {
+		return -ENODEV;
+	}
+
+	return viommu_send_req_sync(viommu, &req, sizeof(req));
+}
+
 static u32 viommu_sva_get_pasid(struct iommu_sva *handle)
 {
 	struct viommu_sva_bond *bond = sva_to_viommu_bond(handle);
@@ -2155,6 +2201,7 @@ static struct iommu_ops viommu_ops = {
 	.sva_bind		= viommu_sva_bind,
 	.sva_unbind		= viommu_sva_unbind,
 	.sva_get_pasid		= viommu_sva_get_pasid,
+	.page_response		= viommu_page_response,
 };
 
 static int viommu_init_vqs(struct viommu_dev *viommu)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 01/11] uapi/virtio-iommu: Add page request grp-id and flags information
  2021-04-23  9:51   ` Vivek Gautam
  (?)
  (?)
@ 2021-09-21 15:58     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-09-21 15:58 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: jacob.jun.pan, mst, joro, will.deacon, linux-kernel,
	shameerali.kolothum.thodi, virtualization, eric.auger, iommu,
	yi.l.liu, Lorenzo.Pieralisi, robin.murphy, linux-arm-kernel

Hi Vivek,

Thanks a lot for your work on this

On Fri, Apr 23, 2021 at 03:21:37PM +0530, Vivek Gautam wrote:
> Add fault information for group-id and necessary flags for page
> request faults that can be handled by page fault handler in
> virtio-iommu driver.
> 
> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Cc: Eric Auger <eric.auger@redhat.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
> Cc: Liu Yi L <yi.l.liu@intel.com>
> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
> Cc: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> ---
>  include/uapi/linux/virtio_iommu.h | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index f8bf927a0689..accc3318ce46 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -307,14 +307,27 @@ struct virtio_iommu_req_invalidate {
>  #define VIRTIO_IOMMU_FAULT_F_DMA_UNRECOV	1
>  #define VIRTIO_IOMMU_FAULT_F_PAGE_REQ		2
>  
> +#define VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID		(1 << 0)
> +#define VIRTIO_IOMMU_FAULT_PRQ_F_LAST_PAGE		(1 << 1)
> +#define VIRTIO_IOMMU_FAULT_PRQ_F_PRIV_DATA		(1 << 2)
> +#define VIRTIO_IOMMU_FAULT_PRQ_F_NEEDS_PASID		(1 << 3)

I don't think this one is necessary here. The NEEDS_PASID flag added by
commit 970471914c67 ("iommu: Allow page responses without PASID") mainly
helps Linux keep track of things internally. It does tell the fault
handler whether to reply with PASID or not, but we don't need that here.
The virtio-iommu driver knows whether a PASID is required by looking at
the "PRG Response PASID Required" bit in the PCIe capability. For non-PCIe
faults (e.g. SMMU stall), I'm guessing we'll need a PROBE property to
declare that the endpoint supports recoverable faults anyway, so "PASID
required in response" can go through there as well.
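
For reference, the PCIe bit is already queryable in the guest through
pci_prg_resp_pasid_required(), so the driver side could be as simple
as the sketch below, where pri_resp_needs_pasid is a made-up
per-endpoint flag:

	if (dev_is_pci(dev))
		vdev->pri_resp_needs_pasid =
			pci_prg_resp_pasid_required(to_pci_dev(dev));
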

> +
> +#define VIRTIO_IOMMU_FAULT_UNREC_F_PASID_VALID		(1 << 0)
> +#define VIRTIO_IOMMU_FAULT_UNREC_F_ADDR_VALID		(1 << 1)
> +#define VIRTIO_IOMMU_FAULT_UNREC_F_FETCH_ADDR_VALID	(1 << 2)
> +
>  struct virtio_iommu_fault {
>  	__u8					reason;
>  	__u8					reserved[3];
>  	__le16					flt_type;
>  	__u8					reserved2[2];
> +	/* flags is actually permission flags */

It's also used for declaring validity of fields.
VIRTIO_IOMMU_FAULT_F_ADDRESS already tells whether the address field is
valid, so all the other flags introduced by this patch can go in here.

>  	__le32					flags;
> +	/* flags for PASID and Page request handling info */
> +	__le32					pr_evt_flags;
>  	__le32					endpoint;
>  	__le32					pasid;
> +	__le32					grpid;

I'm not sure why we made it 32-bit in Linux UAPI; it's a little wasteful.
PCIe PRGI is 9-bits and SMMU STAG is 16-bits. Since the scope of the grpid
is the endpoint, 16-bit means 64k in-flight faults per endpoint, which
seems more than enough.

New fields must be appended at the end of the struct, because old drivers
will expect to find the 'endpoint' field at this offset. You could remove
'reserved3' while adding 'grpid', to keep the struct layout.

>  	__u8					reserved3[4];
>  	__le64					address;
>  	__u8					reserved4[8];


So the base structure, currently in the spec, looks like this:

	struct virtio_iommu_fault {
		u8   reason;
		u8   reserved[3];
		le32 flags;
		le32 endpoint;
		le32 reserved1;
		le64 address;
	};

	#define VIRTIO_IOMMU_FAULT_F_READ	(1 << 0)
	#define VIRTIO_IOMMU_FAULT_F_WRITE	(1 << 1)
	#define VIRTIO_IOMMU_FAULT_F_ADDRESS	(1 << 8)

The extended struct could be:

	struct virtio_iommu_fault {
		u8   reason;
		u8   reserved[3];
		le32 flags;
		le32 endpoint;
		le32 pasid;
		le64 address;
		/* Page request group ID */
		le16 group_id;
		u8   reserved1[6];
		/* For VT-d private data */
		le64 private_data[2];
	};

	#define VIRTIO_IOMMU_FAULT_F_READ	(1 << 0)
	#define VIRTIO_IOMMU_FAULT_F_WRITE	(1 << 1)
	#define VIRTIO_IOMMU_FAULT_F_EXEC	(1 << 2)
	#define VIRTIO_IOMMU_FAULT_F_PRIVILEGED	(1 << 3)
	/* Last fault in group */
	#define VIRTIO_IOMMU_FAULT_F_LAST	(1 << 4)
	/* Fault is a recoverable page request and requires a response */
	#define VIRTIO_IOMMU_FAULT_F_PAGE_REQ	(1 << 5)

	/* address field is valid */
	#define VIRTIO_IOMMU_FAULT_F_ADDRESS	(1 << 8)
	/* pasid field is valid */
	#define VIRTIO_IOMMU_FAULT_F_PASID	(1 << 9)
	/* group_id field is valid */
	#define VIRTIO_IOMMU_FAULT_F_GROUP_ID	(1 << 10)
	/* private data field is valid */
	#define VIRTIO_IOMMU_FAULT_F_PRIV_DATA	(1 << 11)
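
To illustrate how these would compose: a recoverable write fault
carrying a valid address, PASID and group ID would be reported with
(sketch, using the definitions above):

	fault->flags = cpu_to_le32(VIRTIO_IOMMU_FAULT_F_WRITE |
				   VIRTIO_IOMMU_FAULT_F_LAST |
				   VIRTIO_IOMMU_FAULT_F_PAGE_REQ |
				   VIRTIO_IOMMU_FAULT_F_ADDRESS |
				   VIRTIO_IOMMU_FAULT_F_PASID |
				   VIRTIO_IOMMU_FAULT_F_GROUP_ID);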

Thanks,
Jean

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served by viommu_dev
  2021-04-23  9:51   ` Vivek Gautam
  (?)
  (?)
@ 2021-09-21 15:59     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-09-21 15:59 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

On Fri, Apr 23, 2021 at 03:21:38PM +0530, Vivek Gautam wrote:
> Keeping a record of the endpoints that are served by the virtio-iommu
> device would help in redirecting page fault requests to the correct
> endpoint device for handling.
> 
> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 50039070e2aa..c970f386f031 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -48,6 +48,7 @@ struct viommu_dev {
>  	spinlock_t			request_lock;
>  	struct list_head		requests;
>  	void				*evts;
> +	struct list_head		endpoints;

As we're going to search by ID, an xarray or rb_tree would be more
appropriate than a list.
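
A rough sketch of the xarray version (names are only suggestions, and it
assumes viommu_endpoint gains an 'eid' field as noted below; xa_insert()
and xa_load() do their own locking, though vdev lifetime still needs
thought):

	struct viommu_dev {
		...
		struct xarray			endpoints; /* viommu_endpoint, indexed by ID */
	};

	/* in viommu_probe() */
	xa_init(&viommu->endpoints);

	/* at device probe time */
	ret = xa_insert(&viommu->endpoints, vdev->eid, vdev, GFP_KERNEL);

	/* in the fault path */
	vdev = xa_load(&viommu->endpoints, endpoint);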

>  
>  	/* Device configuration */
>  	struct iommu_domain_geometry	geometry;
> @@ -115,6 +116,12 @@ struct viommu_endpoint {
>  	void				*pgtf;
>  };
>  
> +struct viommu_ep_entry {
> +	u32				eid;
> +	struct viommu_endpoint		*vdev;
> +	struct list_head		list;
> +};

No need for a separate struct, I think you can just add the list head and
id into viommu_endpoint.

> +
>  struct viommu_request {
>  	struct list_head		list;
>  	void				*writeback;
> @@ -573,6 +580,7 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>  	size_t probe_len;
>  	struct virtio_iommu_req_probe *probe;
>  	struct virtio_iommu_probe_property *prop;
> +	struct viommu_ep_entry *ep;
>  	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>  	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
>  
> @@ -640,6 +648,18 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>  		prop = (void *)probe->properties + cur;
>  		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
>  	}
> +	if (ret)
> +		goto out_free;
> +
> +	ep = kzalloc(sizeof(*ep), GFP_KERNEL);
> +	if (!ep) {
> +		ret = -ENOMEM;
> +		goto out_free;
> +	}
> +	ep->eid = probe->endpoint;
> +	ep->vdev = vdev;
> +
> +	list_add(&ep->list, &viommu->endpoints);

This should be in viommu_probe_device() (viommu_probe_endpoint() is only
called if F_PROBE is negotiated). I think we need a lock for this
list/xarray.

Thanks,
Jean

>  
>  out_free:
>  	kfree(probe);
> @@ -1649,6 +1669,7 @@ static int viommu_probe(struct virtio_device *vdev)
>  	viommu->dev = dev;
>  	viommu->vdev = vdev;
>  	INIT_LIST_HEAD(&viommu->requests);
> +	INIT_LIST_HEAD(&viommu->endpoints);
>  
>  	ret = viommu_init_vqs(viommu);
>  	if (ret)
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults
  2021-04-23  9:51   ` Vivek Gautam
  (?)
  (?)
@ 2021-09-21 16:03     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-09-21 16:03 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

On Fri, Apr 23, 2021 at 03:21:39PM +0530, Vivek Gautam wrote:
> Redirect the incoming page faults to the registered fault handler
> that can take the fault information such as pasid, page request
> group-id, address, and pasid flags.
> 
> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 80 ++++++++++++++++++++++++++++++-
>  include/uapi/linux/virtio_iommu.h |  1 +
>  2 files changed, 80 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index c970f386f031..fd237cad1ce5 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -37,6 +37,13 @@
>  /* Some architectures need an Address Space ID for each page table */
>  DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
>  
> +struct viommu_dev_pri_work {
> +	struct work_struct		work;
> +	struct viommu_dev		*dev;
> +	struct virtio_iommu_fault	*vfault;
> +	u32				endpoint;
> +};
> +
>  struct viommu_dev {
>  	struct iommu_device		iommu;
>  	struct device			*dev;
> @@ -49,6 +56,8 @@ struct viommu_dev {
>  	struct list_head		requests;
>  	void				*evts;
>  	struct list_head		endpoints;
> +	struct workqueue_struct		*pri_wq;
> +	struct viommu_dev_pri_work	*pri_work;

IOPF already has a workqueue, so the driver doesn't need one.
iommu_report_device_fault() should be fast enough to be called from the
event handler.
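
i.e. the handler could do it inline, roughly like this (a sketch;
viommu_get_endpoint() and viommu_fill_page_request() are made-up helpers
standing in for the endpoint lookup and the flag translation below):

	if (type == VIRTIO_IOMMU_FAULT_F_PAGE_REQ) {
		struct iommu_fault_event fault_evt = {
			.fault.type = IOMMU_FAULT_PAGE_REQ,
		};

		/* hypothetical helpers */
		vdev = viommu_get_endpoint(viommu, endpoint);
		viommu_fill_page_request(&fault_evt.fault.prm, fault);
		return iommu_report_device_fault(vdev->dev, &fault_evt);
	}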

>  
>  	/* Device configuration */
>  	struct iommu_domain_geometry	geometry;
> @@ -666,6 +675,58 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>  	return ret;
>  }
>  
> +static void viommu_handle_ppr(struct work_struct *work)
> +{
> +	struct viommu_dev_pri_work *pwork =
> +				container_of(work, struct viommu_dev_pri_work, work);
> +	struct viommu_dev *viommu = pwork->dev;
> +	struct virtio_iommu_fault *vfault = pwork->vfault;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_ep_entry *ep;
> +	struct iommu_fault_event fault_evt = {
> +		.fault.type = IOMMU_FAULT_PAGE_REQ,
> +	};
> +	struct iommu_fault_page_request *prq = &fault_evt.fault.prm;
> +
> +	u32 flags	= le32_to_cpu(vfault->flags);
> +	u32 prq_flags	= le32_to_cpu(vfault->pr_evt_flags);
> +	u32 endpoint	= pwork->endpoint;
> +
> +	memset(prq, 0, sizeof(struct iommu_fault_page_request));

The fault_evt struct is already initialized.

> +	prq->addr = le64_to_cpu(vfault->address);
> +
> +	if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_LAST_PAGE)
> +		prq->flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
> +	if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) {
> +		prq->flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
> +		prq->pasid = le32_to_cpu(vfault->pasid);
> +		prq->grpid = le32_to_cpu(vfault->grpid);
> +	}
> +
> +	if (flags & VIRTIO_IOMMU_FAULT_F_READ)
> +		prq->perm |= IOMMU_FAULT_PERM_READ;
> +	if (flags & VIRTIO_IOMMU_FAULT_F_WRITE)
> +		prq->perm |= IOMMU_FAULT_PERM_WRITE;
> +	if (flags & VIRTIO_IOMMU_FAULT_F_EXEC)
> +		prq->perm |= IOMMU_FAULT_PERM_EXEC;
> +	if (flags & VIRTIO_IOMMU_FAULT_F_PRIV)
> +		prq->perm |= IOMMU_FAULT_PERM_PRIV;
> +
> +	list_for_each_entry(ep, &viommu->endpoints, list) {
> +		if (ep->eid == endpoint) {
> +			vdev = ep->vdev;
> +			break;
> +		}
> +	}
> +
> +	if ((prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) &&
> +	    (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_NEEDS_PASID))
> +		prq->flags |= IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
> +
> +	if (iommu_report_device_fault(vdev->dev, &fault_evt))
> +		dev_err(vdev->dev, "Couldn't handle page request\n");

An error likely means that nobody registered a fault handler, but we could
display a few more details about the fault that would help debug the
endpoint.
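
For example (sketch):

	if (iommu_report_device_fault(vdev->dev, &fault_evt))
		dev_err_ratelimited(vdev->dev,
				    "unhandled page request: pasid %u grpid %u addr %#llx\n",
				    prq->pasid, prq->grpid, prq->addr);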

> +}
> +
>  static int viommu_fault_handler(struct viommu_dev *viommu,
>  				struct virtio_iommu_fault *fault)
>  {
> @@ -679,7 +740,13 @@ static int viommu_fault_handler(struct viommu_dev *viommu,
>  	u32 pasid	= le32_to_cpu(fault->pasid);
>  
>  	if (type == VIRTIO_IOMMU_FAULT_F_PAGE_REQ) {
> -		dev_info(viommu->dev, "Page request fault - unhandled\n");
> +		dev_info_ratelimited(viommu->dev,
> +				     "Page request fault from EP %u\n",
> +				     endpoint);

That's rather for debugging the virtio-iommu driver, so it should be
dev_dbg() (or removed entirely).

> +
> +		viommu->pri_work->vfault = fault;
> +		viommu->pri_work->endpoint = endpoint;
> +		queue_work(viommu->pri_wq, &viommu->pri_work->work);
>  		return 0;
>  	}
>  
> @@ -1683,6 +1750,17 @@ static int viommu_probe(struct virtio_device *vdev)
>  		goto err_free_vqs;
>  	}
>  
> +	viommu->pri_work = kzalloc(sizeof(*viommu->pri_work), GFP_KERNEL);
> +	if (!viommu->pri_work)
> +		return -ENOMEM;
> +
> +	viommu->pri_work->dev = viommu;
> +
> +	INIT_WORK(&viommu->pri_work->work, viommu_handle_ppr);
> +	viommu->pri_wq = create_singlethread_workqueue("viommu-pri-wq");
> +	if (!viommu->pri_wq)
> +		return -ENOMEM;
> +
>  	viommu->map_flags = VIRTIO_IOMMU_MAP_F_READ | VIRTIO_IOMMU_MAP_F_WRITE;
>  	viommu->last_domain = ~0U;
>  
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index accc3318ce46..53aa88e6b077 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -302,6 +302,7 @@ struct virtio_iommu_req_invalidate {
>  #define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
>  #define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
>  #define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
> +#define VIRTIO_IOMMU_FAULT_F_PRIV		(1 << 3)

Should go in the previous patch. (I'd also prefer 'privileged' because in
this context 'priv' is easily read as 'private')

Thanks,
Jean

>  #define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
>  
>  #define VIRTIO_IOMMU_FAULT_F_DMA_UNRECOV	1
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 05/11] iommu/virtio: Add SVA feature and related enable/disable callbacks
  2021-04-23  9:51   ` Vivek Gautam
  (?)
  (?)
@ 2021-09-21 16:04     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-09-21 16:04 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

On Fri, Apr 23, 2021 at 03:21:41PM +0530, Vivek Gautam wrote:
> Add a feature flag to virtio iommu for Shared virtual addressing
> (SVA). This feature would indicate the availability of a path for
> handling device page faults, and the provision for sending page responses.

In this case the feature should probably be called PAGE_REQUEST or
similar. SVA aggregates PF + PASID + shared page tables.

Thanks,
Jean

> Also add necessary methods to enable and disable SVA so that the
> masters can enable the SVA path. This also requires enabling the
> PRI capability on the device.
> 
> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 268 ++++++++++++++++++++++++++++++
>  include/uapi/linux/virtio_iommu.h |   1 +
>  2 files changed, 269 insertions(+)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 3da5f0807711..250c137a211b 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -18,6 +18,7 @@
>  #include <linux/of_iommu.h>
>  #include <linux/of_platform.h>
>  #include <linux/pci.h>
> +#include <linux/pci-ats.h>
>  #include <linux/platform_device.h>
>  #include <linux/virtio.h>
>  #include <linux/virtio_config.h>
> @@ -26,6 +27,7 @@
>  
>  #include <uapi/linux/virtio_iommu.h>
>  #include "iommu-pasid-table.h"
> +#include "iommu-sva-lib.h"
>  
>  #define MSI_IOVA_BASE			0x8000000
>  #define MSI_IOVA_LENGTH			0x100000
> @@ -37,6 +39,9 @@
>  /* Some architectures need an Address Space ID for each page table */
>  DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
>  
> +static DEFINE_MUTEX(sva_lock);
> +static DEFINE_MUTEX(iopf_lock);
> +
>  struct viommu_dev_pri_work {
>  	struct work_struct		work;
>  	struct viommu_dev		*dev;
> @@ -71,6 +76,7 @@ struct viommu_dev {
>  
>  	bool				has_map:1;
>  	bool				has_table:1;
> +	bool				has_sva:1;
>  };
>  
>  struct viommu_mapping {
> @@ -124,6 +130,12 @@ struct viommu_endpoint {
>  	void				*pstf;
>  	/* Preferred page table format */
>  	void				*pgtf;
> +
> +	/* sva */
> +	bool				ats_supported;
> +	bool				pri_supported;
> +	bool				sva_enabled;
> +	bool				iopf_enabled;
>  };
>  
>  struct viommu_ep_entry {
> @@ -582,6 +594,64 @@ static int viommu_add_pstf(struct viommu_endpoint *vdev, void *pstf, size_t len)
>  	return 0;
>  }
>  
> +static int viommu_init_ats_pri(struct viommu_endpoint *vdev)
> +{
> +	struct device *dev = vdev->dev;
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +
> +	if (!dev_is_pci(vdev->dev))
> +		return -EINVAL;
> +
> +	if (pci_ats_supported(pdev))
> +		vdev->ats_supported = true;
> +
> +	if (pci_pri_supported(pdev))
> +		vdev->pri_supported = true;
> +
> +	return 0;
> +}
> +
> +static int viommu_enable_pri(struct viommu_endpoint *vdev)
> +{
> +	int ret;
> +	struct pci_dev *pdev;
> +
> +	/* Let's allow only 4 requests for PRI right now */
> +	size_t max_inflight_pprs = 4;
> +
> +	if (!vdev->pri_supported || !vdev->ats_supported)
> +		return -ENODEV;
> +
> +	pdev = to_pci_dev(vdev->dev);
> +
> +	ret = pci_reset_pri(pdev);
> +	if (ret)
> +		return ret;
> +
> +	ret = pci_enable_pri(pdev, max_inflight_pprs);
> +	if (ret) {
> +		dev_err(vdev->dev, "cannot enable PRI: %d\n", ret);
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static void viommu_disable_pri(struct viommu_endpoint *vdev)
> +{
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(vdev->dev))
> +		return;
> +
> +	pdev = to_pci_dev(vdev->dev);
> +
> +	if (!pdev->pri_enabled)
> +		return;
> +
> +	pci_disable_pri(pdev);
> +}
> +
>  static int viommu_init_queues(struct viommu_dev *viommu)
>  {
>  	viommu->iopf_pri = iopf_queue_alloc(dev_name(viommu->dev));
> @@ -684,6 +754,10 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>  	if (ret)
>  		goto out_free_eps;
>  
> +	ret = viommu_init_ats_pri(vdev);
> +	if (ret)
> +		goto out_free_eps;
> +
>  	kfree(probe);
>  	return 0;
>  
> @@ -1681,6 +1755,194 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  	return iommu_fwspec_add_ids(dev, args->args, 1);
>  }
>  
> +static bool viommu_endpoint_iopf_supported(struct viommu_endpoint *vdev)
> +{
> +	/* TODO: support Stall model later */
> +	return vdev->pri_supported;
> +}
> +
> +bool viommu_endpoint_sva_supported(struct viommu_endpoint *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->viommu;
> +
> +	if (!viommu->has_sva)
> +		return false;
> +
> +	return vdev->pasid_bits;
> +}
> +
> +bool viommu_endpoint_sva_enabled(struct viommu_endpoint *vdev)
> +{
> +	bool enabled;
> +
> +	mutex_lock(&sva_lock);
> +	enabled = vdev->sva_enabled;
> +	mutex_unlock(&sva_lock);
> +	return enabled;
> +}
> +
> +static int viommu_endpoint_sva_enable_iopf(struct viommu_endpoint *vdev)
> +{
> +	int ret;
> +	struct device *dev = vdev->dev;
> +
> +	if (!viommu_endpoint_iopf_supported(vdev))
> +		return 0;
> +
> +	if (!vdev->iopf_enabled)
> +		return -EINVAL;
> +
> +	if (vdev->pri_supported) {
> +		ret = iopf_queue_add_device(vdev->viommu->iopf_pri, dev);
> +		if (ret)
> +			return ret;
> +	} else {
> +		/* No other way to handle io page fault then */
> +		return -EINVAL;
> +	}
> +
> +	ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
> +	if (ret)
> +		iopf_queue_remove_device(vdev->viommu->iopf_pri, dev);
> +
> +	return ret;
> +}
> +
> +static void viommu_endpoint_sva_disable_iopf(struct viommu_endpoint *vdev)
> +{
> +	struct device *dev = vdev->dev;
> +
> +	if (!vdev->iopf_enabled)
> +		return;
> +
> +	iommu_unregister_device_fault_handler(dev);
> +	iopf_queue_remove_device(vdev->viommu->iopf_pri, dev);
> +}
> +
> +static int viommu_endpoint_enable_sva(struct viommu_endpoint *vdev)
> +{
> +	int ret;
> +
> +	mutex_lock(&sva_lock);
> +	ret = viommu_endpoint_sva_enable_iopf(vdev);
> +	if (!ret)
> +		vdev->sva_enabled = true;
> +	mutex_unlock(&sva_lock);
> +
> +	return ret;
> +}
> +
> +static int viommu_endpoint_disable_sva(struct viommu_endpoint *vdev)
> +{
> +	mutex_lock(&sva_lock);
> +	viommu_endpoint_sva_disable_iopf(vdev);
> +	vdev->sva_enabled = false;
> +	mutex_unlock(&sva_lock);
> +
> +	return 0;
> +}
> +
> +int viommu_endpoint_enable_iopf(struct viommu_endpoint *vdev)
> +{
> +	int ret = 0;
> +
> +	mutex_lock(&iopf_lock);
> +	if (vdev->pri_supported)
> +		ret = viommu_enable_pri(vdev);
> +
> +	/* Don't return with iopf_lock held on error */
> +	if (!ret)
> +		vdev->iopf_enabled = true;
> +	mutex_unlock(&iopf_lock);
> +	return ret;
> +}
> +
> +int viommu_endpoint_disable_iopf(struct viommu_endpoint *vdev)
> +{
> +	if (vdev->sva_enabled)
> +		return -EBUSY;
> +
> +	mutex_lock(&iopf_lock);
> +	viommu_disable_pri(vdev);
> +	vdev->iopf_enabled = false;
> +	mutex_unlock(&iopf_lock);
> +	return 0;
> +}
> +
> +static bool viommu_dev_has_feature(struct device *dev,
> +				   enum iommu_dev_features feat)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> +	if (!vdev)
> +		return false;
> +
> +	switch (feat) {
> +	case IOMMU_DEV_FEAT_IOPF:
> +		return viommu_endpoint_iopf_supported(vdev);
> +	case IOMMU_DEV_FEAT_SVA:
> +		return viommu_endpoint_sva_supported(vdev);
> +	default:
> +		return false;
> +	}
> +}
> +
> +static bool viommu_dev_feature_enabled(struct device *dev,
> +				       enum iommu_dev_features feat)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> +	if (!vdev)
> +		return false;
> +
> +	switch (feat) {
> +	case IOMMU_DEV_FEAT_IOPF:
> +		return vdev->iopf_enabled;
> +	case IOMMU_DEV_FEAT_SVA:
> +		return viommu_endpoint_sva_enabled(vdev);
> +	default:
> +		return false;
> +	}
> +}
> +
> +static int viommu_dev_enable_feature(struct device *dev,
> +				     enum iommu_dev_features feat)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> +	if (!viommu_dev_has_feature(dev, feat))
> +		return -ENODEV;
> +
> +	if (viommu_dev_feature_enabled(dev, feat))
> +		return -EBUSY;
> +
> +	switch (feat) {
> +	case IOMMU_DEV_FEAT_IOPF:
> +		return viommu_endpoint_enable_iopf(vdev);
> +	case IOMMU_DEV_FEAT_SVA:
> +		return viommu_endpoint_enable_sva(vdev);
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
> +static int viommu_dev_disable_feature(struct device *dev,
> +				      enum iommu_dev_features feat)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> +	if (!viommu_dev_feature_enabled(dev, feat))
> +		return -EINVAL;
> +
> +	switch (feat) {
> +	case IOMMU_DEV_FEAT_IOPF:
> +		return viommu_endpoint_disable_iopf(vdev);
> +	case IOMMU_DEV_FEAT_SVA:
> +		return viommu_endpoint_disable_sva(vdev);
> +	default:
> +		return -EINVAL;
> +	}
> +}
>  static struct iommu_ops viommu_ops = {
>  	.domain_alloc		= viommu_domain_alloc,
>  	.domain_free		= viommu_domain_free,
> @@ -1695,6 +1957,9 @@ static struct iommu_ops viommu_ops = {
>  	.get_resv_regions	= viommu_get_resv_regions,
>  	.put_resv_regions	= generic_iommu_put_resv_regions,
>  	.of_xlate		= viommu_of_xlate,
> +	.dev_feat_enabled	= viommu_dev_feature_enabled,
> +	.dev_enable_feat	= viommu_dev_enable_feature,
> +	.dev_disable_feat	= viommu_dev_disable_feature,
>  };
>  
>  static int viommu_init_vqs(struct viommu_dev *viommu)
> @@ -1811,6 +2076,8 @@ static int viommu_probe(struct virtio_device *vdev)
>  		goto err_free_vqs;
>  	}
>  
> +	viommu->has_sva = virtio_has_feature(vdev, VIRTIO_IOMMU_F_SVA);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -1902,6 +2169,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_PROBE,
>  	VIRTIO_IOMMU_F_MMIO,
>  	VIRTIO_IOMMU_F_ATTACH_TABLE,
> +	VIRTIO_IOMMU_F_SVA,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 53aa88e6b077..88a3db493108 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -17,6 +17,7 @@
>  #define VIRTIO_IOMMU_F_PROBE			4
>  #define VIRTIO_IOMMU_F_MMIO			5
>  #define VIRTIO_IOMMU_F_ATTACH_TABLE		6
> +#define VIRTIO_IOMMU_F_SVA			7
>  
>  struct virtio_iommu_range_64 {
>  	__le64					start;
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 05/11] iommu/virtio: Add SVA feature and related enable/disable callbacks
@ 2021-09-21 16:04     ` Jean-Philippe Brucker
  0 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-09-21 16:04 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: kevin.tian, mst, will.deacon, linux-kernel, virtualization,
	iommu, robin.murphy, linux-arm-kernel

On Fri, Apr 23, 2021 at 03:21:41PM +0530, Vivek Gautam wrote:
> Add a feature flag to virtio iommu for Shared virtual addressing
> (SVA). This feature would indicate the availability of a path for
> handling device page faults, and the provision for sending page
> responses.

In this case the feature should probably be called PAGE_REQUEST or
similar. SVA aggregates PF + PASID + shared page tables.
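
For instance, the change could be limited to the feature bit this patch
adds at the end -- a sketch, with an illustrative name only (not from the
virtio spec):

	/* include/uapi/linux/virtio_iommu.h */
	#define VIRTIO_IOMMU_F_ATTACH_TABLE		6
	#define VIRTIO_IOMMU_F_PG_REQ			7	/* page request/response path */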

Thanks,
Jean

> Also add necessary methods to enable and disable SVA so that the
> masters can enable the SVA path. This also requires enabling the
> PRI capability on the device.
> 
> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 268 ++++++++++++++++++++++++++++++
>  include/uapi/linux/virtio_iommu.h |   1 +
>  2 files changed, 269 insertions(+)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 3da5f0807711..250c137a211b 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -18,6 +18,7 @@
>  #include <linux/of_iommu.h>
>  #include <linux/of_platform.h>
>  #include <linux/pci.h>
> +#include <linux/pci-ats.h>
>  #include <linux/platform_device.h>
>  #include <linux/virtio.h>
>  #include <linux/virtio_config.h>
> @@ -26,6 +27,7 @@
>  
>  #include <uapi/linux/virtio_iommu.h>
>  #include "iommu-pasid-table.h"
> +#include "iommu-sva-lib.h"
>  
>  #define MSI_IOVA_BASE			0x8000000
>  #define MSI_IOVA_LENGTH			0x100000
> @@ -37,6 +39,9 @@
>  /* Some architectures need an Address Space ID for each page table */
>  DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
>  
> +static DEFINE_MUTEX(sva_lock);
> +static DEFINE_MUTEX(iopf_lock);
> +
>  struct viommu_dev_pri_work {
>  	struct work_struct		work;
>  	struct viommu_dev		*dev;
> @@ -71,6 +76,7 @@ struct viommu_dev {
>  
>  	bool				has_map:1;
>  	bool				has_table:1;
> +	bool				has_sva:1;
>  };
>  
>  struct viommu_mapping {
> @@ -124,6 +130,12 @@ struct viommu_endpoint {
>  	void				*pstf;
>  	/* Preferred page table format */
>  	void				*pgtf;
> +
> +	/* sva */
> +	bool				ats_supported;
> +	bool				pri_supported;
> +	bool				sva_enabled;
> +	bool				iopf_enabled;
>  };
>  
>  struct viommu_ep_entry {
> @@ -582,6 +594,64 @@ static int viommu_add_pstf(struct viommu_endpoint *vdev, void *pstf, size_t len)
>  	return 0;
>  }
>  
> +static int viommu_init_ats_pri(struct viommu_endpoint *vdev)
> +{
> +	struct device *dev = vdev->dev;
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +
> +	if (!dev_is_pci(vdev->dev))
> +		return -EINVAL;
> +
> +	if (pci_ats_supported(pdev))
> +		vdev->ats_supported = true;
> +
> +	if (pci_pri_supported(pdev))
> +		vdev->pri_supported = true;
> +
> +	return 0;
> +}
> +
> +static int viommu_enable_pri(struct viommu_endpoint *vdev)
> +{
> +	int ret;
> +	struct pci_dev *pdev;
> +
> +	/* Let's allow only 4 requests for PRI right now */
> +	size_t max_inflight_pprs = 4;
> +
> +	if (!vdev->pri_supported || !vdev->ats_supported)
> +		return -ENODEV;
> +
> +	pdev = to_pci_dev(vdev->dev);
> +
> +	ret = pci_reset_pri(pdev);
> +	if (ret)
> +		return ret;
> +
> +	ret = pci_enable_pri(pdev, max_inflight_pprs);
> +	if (ret) {
> +		dev_err(vdev->dev, "cannot enable PRI: %d\n", ret);
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static void viommu_disable_pri(struct viommu_endpoint *vdev)
> +{
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(vdev->dev))
> +		return;
> +
> +	pdev = to_pci_dev(vdev->dev);
> +
> +	if (!pdev->pri_enabled)
> +		return;
> +
> +	pci_disable_pri(pdev);
> +}
> +
>  static int viommu_init_queues(struct viommu_dev *viommu)
>  {
>  	viommu->iopf_pri = iopf_queue_alloc(dev_name(viommu->dev));
> @@ -684,6 +754,10 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>  	if (ret)
>  		goto out_free_eps;
>  
> +	ret = viommu_init_ats_pri(vdev);
> +	if (ret)
> +		goto out_free_eps;
> +
>  	kfree(probe);
>  	return 0;
>  
> @@ -1681,6 +1755,194 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  	return iommu_fwspec_add_ids(dev, args->args, 1);
>  }
>  
> +static bool viommu_endpoint_iopf_supported(struct viommu_endpoint *vdev)
> +{
> +	/* TODO: support Stall model later */
> +	return vdev->pri_supported;
> +}
> +
> +bool viommu_endpoint_sva_supported(struct viommu_endpoint *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->viommu;
> +
> +	if (!viommu->has_sva)
> +		return false;
> +
> +	return vdev->pasid_bits;
> +}
> +
> +bool viommu_endpoint_sva_enabled(struct viommu_endpoint *vdev)
> +{
> +	bool enabled;
> +
> +	mutex_lock(&sva_lock);
> +	enabled = vdev->sva_enabled;
> +	mutex_unlock(&sva_lock);
> +	return enabled;
> +}
> +
> +static int viommu_endpoint_sva_enable_iopf(struct viommu_endpoint *vdev)
> +{
> +	int ret;
> +	struct device *dev = vdev->dev;
> +
> +	if (!viommu_endpoint_iopf_supported(vdev))
> +		return 0;
> +
> +	if (!vdev->iopf_enabled)
> +		return -EINVAL;
> +
> +	if (vdev->pri_supported) {
> +		ret = iopf_queue_add_device(vdev->viommu->iopf_pri, dev);
> +		if (ret)
> +			return ret;
> +	} else {
> +		/* No other way to handle io page fault then */
> +		return -EINVAL;
> +	}
> +
> +	ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
> +	if (ret)
> +		iopf_queue_remove_device(vdev->viommu->iopf_pri, dev);
> +
> +	return ret;
> +}
> +
> +static void viommu_endpoint_sva_disable_iopf(struct viommu_endpoint *vdev)
> +{
> +	struct device *dev = vdev->dev;
> +
> +	if (!vdev->iopf_enabled)
> +		return;
> +
> +	iommu_unregister_device_fault_handler(dev);
> +	iopf_queue_remove_device(vdev->viommu->iopf_pri, dev);
> +}
> +
> +static int viommu_endpoint_enable_sva(struct viommu_endpoint *vdev)
> +{
> +	int ret;
> +
> +	mutex_lock(&sva_lock);
> +	ret = viommu_endpoint_sva_enable_iopf(vdev);
> +	if (!ret)
> +		vdev->sva_enabled = true;
> +	mutex_unlock(&sva_lock);
> +
> +	return ret;
> +}
> +
> +static int viommu_endpoint_disable_sva(struct viommu_endpoint *vdev)
> +{
> +	mutex_lock(&sva_lock);
> +	viommu_endpoint_sva_disable_iopf(vdev);
> +	vdev->sva_enabled = false;
> +	mutex_unlock(&sva_lock);
> +
> +	return 0;
> +}
> +
> +int viommu_endpoint_enable_iopf(struct viommu_endpoint *vdev)
> +{
> +	int ret;
> +
> +	mutex_lock(&iopf_lock);
> +	if (vdev->pri_supported) {
> +		ret = viommu_enable_pri(vdev);
> +		if (ret)
> +			return ret;
> +	}
> +	vdev->iopf_enabled = true;
> +	mutex_unlock(&iopf_lock);
> +	return 0;
> +}
> +
> +int viommu_endpoint_disable_iopf(struct viommu_endpoint *vdev)
> +{
> +	if (vdev->sva_enabled)
> +		return -EBUSY;
> +
> +	mutex_lock(&iopf_lock);
> +	viommu_disable_pri(vdev);
> +	vdev->iopf_enabled = false;
> +	mutex_unlock(&iopf_lock);
> +	return 0;
> +}
> +
> +static bool viommu_dev_has_feature(struct device *dev,
> +				   enum iommu_dev_features feat)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> +	if (!vdev)
> +		return false;
> +
> +	switch (feat) {
> +	case IOMMU_DEV_FEAT_IOPF:
> +		return viommu_endpoint_iopf_supported(vdev);
> +	case IOMMU_DEV_FEAT_SVA:
> +		return viommu_endpoint_sva_supported(vdev);
> +	default:
> +		return false;
> +	}
> +}
> +
> +static bool viommu_dev_feature_enabled(struct device *dev,
> +				       enum iommu_dev_features feat)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> +	if (!vdev)
> +		return false;
> +
> +	switch (feat) {
> +	case IOMMU_DEV_FEAT_IOPF:
> +		return vdev->iopf_enabled;
> +	case IOMMU_DEV_FEAT_SVA:
> +		return viommu_endpoint_sva_enabled(vdev);
> +	default:
> +		return false;
> +	}
> +}
> +
> +static int viommu_dev_enable_feature(struct device *dev,
> +				     enum iommu_dev_features feat)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> +	if (!viommu_dev_has_feature(dev, feat))
> +		return -ENODEV;
> +
> +	if (viommu_dev_feature_enabled(dev, feat))
> +		return -EBUSY;
> +
> +	switch (feat) {
> +	case IOMMU_DEV_FEAT_IOPF:
> +		return viommu_endpoint_enable_iopf(vdev);
> +	case IOMMU_DEV_FEAT_SVA:
> +		return viommu_endpoint_enable_sva(vdev);
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
> +static int viommu_dev_disable_feature(struct device *dev,
> +				      enum iommu_dev_features feat)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> +	if (!viommu_dev_feature_enabled(dev, feat))
> +		return -EINVAL;
> +
> +	switch (feat) {
> +	case IOMMU_DEV_FEAT_IOPF:
> +		return viommu_endpoint_disable_iopf(vdev);
> +	case IOMMU_DEV_FEAT_SVA:
> +		return viommu_endpoint_disable_sva(vdev);
> +	default:
> +		return -EINVAL;
> +	}
> +}
>  static struct iommu_ops viommu_ops = {
>  	.domain_alloc		= viommu_domain_alloc,
>  	.domain_free		= viommu_domain_free,
> @@ -1695,6 +1957,9 @@ static struct iommu_ops viommu_ops = {
>  	.get_resv_regions	= viommu_get_resv_regions,
>  	.put_resv_regions	= generic_iommu_put_resv_regions,
>  	.of_xlate		= viommu_of_xlate,
> +	.dev_feat_enabled	= viommu_dev_feature_enabled,
> +	.dev_enable_feat	= viommu_dev_enable_feature,
> +	.dev_disable_feat	= viommu_dev_disable_feature,
>  };
>  
>  static int viommu_init_vqs(struct viommu_dev *viommu)
> @@ -1811,6 +2076,8 @@ static int viommu_probe(struct virtio_device *vdev)
>  		goto err_free_vqs;
>  	}
>  
> +	viommu->has_sva = virtio_has_feature(vdev, VIRTIO_IOMMU_F_SVA);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -1902,6 +2169,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_PROBE,
>  	VIRTIO_IOMMU_F_MMIO,
>  	VIRTIO_IOMMU_F_ATTACH_TABLE,
> +	VIRTIO_IOMMU_F_SVA,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 53aa88e6b077..88a3db493108 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -17,6 +17,7 @@
>  #define VIRTIO_IOMMU_F_PROBE			4
>  #define VIRTIO_IOMMU_F_MMIO			5
>  #define VIRTIO_IOMMU_F_ATTACH_TABLE		6
> +#define VIRTIO_IOMMU_F_SVA			7
>  
>  struct virtio_iommu_range_64 {
>  	__le64					start;
> -- 
> 2.17.1
> 
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 97+ messages in thread
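
For context, a minimal sketch of how an endpoint driver consumes the
callbacks wired up in this patch, through the generic IOMMU feature API of
this era (error handling trimmed; iommu_sva_bind_device() is the standard
bind entry point):

	if (iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF))
		return -ENODEV;
	if (iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA)) {
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
		return -ENODEV;
	}
	/* Bind the current process address space; the PASID comes from the bond */
	handle = iommu_sva_bind_device(dev, current->mm, NULL);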

* Re: [PATCH RFC v1 08/11] iommu/arm-smmu-v3: Implement shared context alloc and free ops
  2021-04-23  9:51   ` Vivek Gautam
@ 2021-09-21 16:07     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-09-21 16:07 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

On Fri, Apr 23, 2021 at 03:21:44PM +0530, Vivek Gautam wrote:
> Implement alloc_shared_cd and free_shared_cd in cd-lib, and start
> using them for the arm-smmu-v3-sva implementation.
> 
> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> ---
>  .../arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c      | 71 ++++++++--------
>  .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 83 ++++++++-----------
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   |  1 -
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   | 14 ----
>  4 files changed, 73 insertions(+), 96 deletions(-)
> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
> index 537b7c784d40..b87829796596 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
> @@ -285,16 +285,14 @@ static bool arm_smmu_free_asid(struct xarray *xa, void *cookie_cd)
>   * descriptor is using it, try to replace it.
>   */
>  static struct arm_smmu_ctx_desc *
> -arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
> +arm_smmu_share_asid(struct iommu_pasid_table *tbl, struct mm_struct *mm,
> +		    struct xarray *xa, u16 asid, u32 asid_bits)

xa and asid_bits could be stored in some arch-specific section of the
iommu_pasid_table struct. Other table drivers wouldn't need those
arguments.
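
For instance, roughly (field names here are only a guess, since the parent
series' definition isn't quoted in this thread):

	struct iommu_pasid_table {
		struct iommu_vendor_psdtable_ops	*ops;
		struct iommu_vendor_psdtable_cfg	cfg;
		void					*cookie;
		struct list_head			mmu_notifiers;
		/* arch-specific ASID allocator state */
		struct xarray				*asid_xa;
		u32					asid_bits;
	};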

More a comment for the parent series: it may be clearer to give a
different prefix to functions in this file (arm_smmu_cd_, pst_arm_?).
Reading this patch I'm a little confused by what belongs in the IOMMU
driver and what is done by this library. (I also keep reading 'tbl' as
'tlb'. Maybe we could make it 'table' since that doesn't take a lot more
space)

>  {
>  	int ret;
>  	u32 new_asid;
>  	struct arm_smmu_ctx_desc *cd;
> -	struct arm_smmu_device *smmu;
> -	struct arm_smmu_domain *smmu_domain;
> -	struct iommu_pasid_table *tbl;
>  
> -	cd = xa_load(&arm_smmu_asid_xa, asid);
> +	cd = xa_load(xa, asid);
>  	if (!cd)
>  		return NULL;
>  
> @@ -306,12 +304,8 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
>  		return cd;
>  	}
>  
> -	smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
> -	smmu = smmu_domain->smmu;
> -	tbl = smmu_domain->tbl;
> -
> -	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
> -		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
> +	ret = xa_alloc(xa, &new_asid, cd, XA_LIMIT(1, (1 << asid_bits) - 1),
> +		       GFP_KERNEL);
>  	if (ret)
>  		return ERR_PTR(-ENOSPC);
>  	/*
> @@ -325,48 +319,52 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
>  	 * be some overlap between use of both ASIDs, until we invalidate the
>  	 * TLB.
>  	 */
> -	ret = iommu_psdtable_write(tbl, &tbl->cfg, 0, cd);
> +	ret = arm_smmu_write_ctx_desc(&tbl->cfg, 0, cd);
>  	if (ret)
>  		return ERR_PTR(-ENOSYS);
>  
>  	/* Invalidate TLB entries previously associated with that context */
> -	iommu_psdtable_flush_tlb(tbl, smmu_domain, asid);
> +	iommu_psdtable_flush_tlb(tbl, tbl->cookie, asid);
>  
> -	xa_erase(&arm_smmu_asid_xa, asid);
> +	xa_erase(xa, asid);
>  	return NULL;
>  }
>  
> -struct arm_smmu_ctx_desc *
> -arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
> +static struct iommu_psdtable_mmu_notifier *
> +arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm,
> +			 struct xarray *xa, u32 asid_bits)
>  {
>  	u16 asid;
>  	int err = 0;
>  	u64 tcr, par, reg;
>  	struct arm_smmu_ctx_desc *cd;
>  	struct arm_smmu_ctx_desc *ret = NULL;
> +	struct iommu_psdtable_mmu_notifier *pst_mn;
>  
>  	asid = arm64_mm_context_get(mm);
>  	if (!asid)
>  		return ERR_PTR(-ESRCH);
>  
> +	pst_mn = kzalloc(sizeof(*pst_mn), GFP_KERNEL);
> +	if (!pst_mn) {
> +		err = -ENOMEM;
> +		goto out_put_context;
> +	}
> +
>  	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
>  	if (!cd) {
>  		err = -ENOMEM;
> -		goto out_put_context;
> +		goto out_free_mn;
>  	}
>  
>  	refcount_set(&cd->refs, 1);
>  
> -	mutex_lock(&arm_smmu_asid_lock);
> -	ret = arm_smmu_share_asid(mm, asid);
> +	ret = arm_smmu_share_asid(tbl, mm, xa, asid, asid_bits);
>  	if (ret) {
> -		mutex_unlock(&arm_smmu_asid_lock);
>  		goto out_free_cd;
>  	}
>  
> -	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
> -	mutex_unlock(&arm_smmu_asid_lock);
> -
> +	err = xa_insert(xa, asid, cd, GFP_KERNEL);
>  	if (err)
>  		goto out_free_asid;
>  
> @@ -406,21 +404,26 @@ arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
>  	cd->asid = asid;
>  	cd->mm = mm;
>  
> -	return cd;
> +	pst_mn->vendor.cd = cd;
> +	return pst_mn;
>  
>  out_free_asid:
> -	iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd);
> +	arm_smmu_free_asid(xa, cd);
>  out_free_cd:
>  	kfree(cd);
> +out_free_mn:
> +	kfree(pst_mn);
>  out_put_context:
>  	arm64_mm_context_put(mm);
>  	return err < 0 ? ERR_PTR(err) : ret;
>  }
>  
> -void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
> -			     struct arm_smmu_ctx_desc *cd)
> +static void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
> +				    struct xarray *xa, void *cookie)

Could we pass a struct iommu_psdtable_mmu_notifier, since that's what
alloc_shared() returns?
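
i.e. something like this sketch:

	static void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
					    struct xarray *xa,
					    struct iommu_psdtable_mmu_notifier *mn)
	{
		struct arm_smmu_ctx_desc *cd = mn->vendor.cd;

		if (iommu_psdtable_free_asid(tbl, xa, cd)) {
			/* Unpin ASID */
			arm64_mm_context_put(cd->mm);
			kfree(cd);
		}
	}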

>  {
> -	if (iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd)) {
> +	struct arm_smmu_ctx_desc *cd = cookie;
> +
> +	if (iommu_psdtable_free_asid(tbl, xa, cd)) {
>  		/* Unpin ASID */
>  		arm64_mm_context_put(cd->mm);
>  		kfree(cd);
> @@ -428,11 +431,13 @@ void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
>  }
>  
>  struct iommu_vendor_psdtable_ops arm_cd_table_ops = {
> -	.alloc	 = arm_smmu_alloc_cd_tables,
> -	.free	 = arm_smmu_free_cd_tables,
> -	.prepare = arm_smmu_prepare_cd,
> -	.write	 = arm_smmu_write_ctx_desc,
> -	.free_asid = arm_smmu_free_asid,
> +	.alloc		= arm_smmu_alloc_cd_tables,
> +	.free		= arm_smmu_free_cd_tables,
> +	.prepare	= arm_smmu_prepare_cd,
> +	.write		= arm_smmu_write_ctx_desc,
> +	.free_asid	= arm_smmu_free_asid,
> +	.alloc_shared	= arm_smmu_alloc_shared_cd,
> +	.free_shared	= arm_smmu_free_shared_cd,
>  };
>  
>  struct iommu_pasid_table *
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> index da35d4cc0c1e..ef28d0c409da 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> @@ -13,23 +13,12 @@
>  #include "../../io-pgtable-arm.h"
>  #include "../../iommu-pasid-table.h"
>  
> -struct arm_smmu_mmu_notifier {
> -	struct mmu_notifier		mn;
> -	struct arm_smmu_ctx_desc	*cd;
> -	bool				cleared;
> -	refcount_t			refs;
> -	struct list_head		list;
> -	struct arm_smmu_domain		*domain;
> -};
> -
> -#define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
> -
>  struct arm_smmu_bond {
> -	struct iommu_sva		sva;
> -	struct mm_struct		*mm;
> -	struct arm_smmu_mmu_notifier	*smmu_mn;
> -	struct list_head		list;
> -	refcount_t			refs;
> +	struct iommu_sva			sva;
> +	struct mm_struct			*mm;
> +	struct iommu_psdtable_mmu_notifier	*smmu_mn;
> +	struct list_head			list;
> +	refcount_t				refs;
>  };
>  
>  #define sva_to_bond(handle) \
> @@ -41,20 +30,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
>  					 struct mm_struct *mm,
>  					 unsigned long start, unsigned long end)
>  {
> -	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
> +	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
>  	size_t size = end - start + 1;
>  
>  	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
> -		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
> +		arm_smmu_tlb_inv_range_asid(start, size, cd->asid,
>  					    PAGE_SIZE, false, smmu_domain);
>  	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
>  }
>  
>  static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>  {
> -	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
> +	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
>  	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>  
>  	mutex_lock(&sva_lock);
> @@ -69,7 +60,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>  	 */
>  	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, &quiet_cd);

Another comment for the parent series: I'd prefer making this a
"iommu_psdtable_quiesce()" call, instead of passing "quiet_cd" between
driver and library. Because that won't work if the SMMU driver is a module
or disabled - build of virtio-iommu will probably fail since quiet_cd will
be undefined. We could make the library built-in and move quiet_cd there,
but an explicit library call seems cleaner.
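
A sketch of that, assuming quiet_cd moves into the pasid-table library
(the signature is assumed):

	/* In the library, which would own quiet_cd */
	int iommu_psdtable_quiesce(struct iommu_pasid_table *tbl, u32 pasid)
	{
		return iommu_psdtable_write(tbl, &tbl->cfg, pasid, &quiet_cd);
	}

arm_smmu_mm_release() would then call iommu_psdtable_quiesce(tbl,
mm->pasid) without referencing quiet_cd at all.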

>  
> -	iommu_psdtable_flush_tlb(tbl, smmu_domain, smmu_mn->cd->asid);
> +	iommu_psdtable_flush_tlb(tbl, smmu_domain, cd->asid);

We can directly call arm_smmu_tlb_inv* here. iommu_psdtable_flush_tlb()
should only be called from the library. But with the previous comment,
this invalidation would move to the library.

>  	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
>  
>  	smmu_mn->cleared = true;
> @@ -78,7 +69,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>  
>  static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn)
>  {
> -	kfree(mn_to_smmu(mn));
> +	kfree(mn_to_pstiommu(mn));
>  }
>  
>  static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
> @@ -88,63 +79,59 @@ static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
>  };
>  
>  /* Allocate or get existing MMU notifier for this {domain, mm} pair */
> -static struct arm_smmu_mmu_notifier *
> +static struct iommu_psdtable_mmu_notifier *
>  arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
>  			  struct mm_struct *mm)
>  {
>  	int ret;
> -	struct arm_smmu_ctx_desc *cd;
> -	struct arm_smmu_mmu_notifier *smmu_mn;
> +	struct iommu_psdtable_mmu_notifier *smmu_mn;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>  
> -	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
> +	list_for_each_entry(smmu_mn, &tbl->mmu_notifiers, list) {
>  		if (smmu_mn->mn.mm == mm) {
>  			refcount_inc(&smmu_mn->refs);
>  			return smmu_mn;
>  		}
>  	}
>  
> -	cd = arm_smmu_alloc_shared_cd(tbl, mm);
> -	if (IS_ERR(cd))
> -		return ERR_CAST(cd);
> -
> -	smmu_mn = kzalloc(sizeof(*smmu_mn), GFP_KERNEL);
> -	if (!smmu_mn) {
> -		ret = -ENOMEM;
> -		goto err_free_cd;
> -	}
> +	mutex_lock(&arm_smmu_asid_lock);
> +	smmu_mn = iommu_psdtable_alloc_shared(tbl, mm, &arm_smmu_asid_xa,
> +					      smmu->asid_bits);
> +	mutex_unlock(&arm_smmu_asid_lock);
> +	if (IS_ERR(smmu_mn))
> +		return ERR_CAST(smmu_mn);
>  
>  	refcount_set(&smmu_mn->refs, 1);
> -	smmu_mn->cd = cd;
> -	smmu_mn->domain = smmu_domain;
> +	smmu_mn->cookie = smmu_domain;
>  	smmu_mn->mn.ops = &arm_smmu_mmu_notifier_ops;
>  
>  	ret = mmu_notifier_register(&smmu_mn->mn, mm);
> -	if (ret) {
> -		kfree(smmu_mn);
> +	if (ret)
>  		goto err_free_cd;
> -	}
>  
> -	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, cd);
> +	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
> +				   smmu_mn->vendor.cd);

Pass smmu_mn here, and let the library code get the cd (to allow for other
pasid table implementations)
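
i.e., roughly:

	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, smmu_mn);

where the Arm backend, not the caller, would unpack the descriptor (the
helper below is hypothetical):

	static int arm_smmu_write_ctx_desc_mn(struct iommu_vendor_psdtable_cfg *cfg,
					      int ssid,
					      struct iommu_psdtable_mmu_notifier *mn)
	{
		return arm_smmu_write_ctx_desc(cfg, ssid, mn->vendor.cd);
	}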

>  	if (ret)
>  		goto err_put_notifier;
>  
> -	list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
> +	list_add(&smmu_mn->list, &tbl->mmu_notifiers);

I'd keep the mmu_notifiers list in domain if the library doesn't use it
for anything.

That made me wonder whether the whole of arm_smmu_mmu_notifier_get/put()
could move to the library, since the virtio-iommu version seems to be the
same. They probably belong in iommu-sva-lib but we can revisit that when
there are more users.

Thanks,
Jean

>  	return smmu_mn;
>  
>  err_put_notifier:
> +	/* mmu_notifier_put() frees smmu_mn, so release the CD first */
> +	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, smmu_mn->vendor.cd);
>  	/* Frees smmu_mn */
>  	mmu_notifier_put(&smmu_mn->mn);
> +	return ERR_PTR(ret);
>  err_free_cd:
> -	arm_smmu_free_shared_cd(tbl, cd);
> +	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, smmu_mn->vendor.cd);
> +	kfree(smmu_mn);
>  	return ERR_PTR(ret);
>  }
>  
> -static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
> +static void
> +arm_smmu_mmu_notifier_put(struct iommu_psdtable_mmu_notifier *smmu_mn)
>  {
>  	struct mm_struct *mm = smmu_mn->mn.mm;
> -	struct arm_smmu_ctx_desc *cd = smmu_mn->cd;
> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
>  	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>  
>  	if (!refcount_dec_and_test(&smmu_mn->refs))
> @@ -164,7 +151,7 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
>  
>  	/* Frees smmu_mn */
>  	mmu_notifier_put(&smmu_mn->mn);
> -	arm_smmu_free_shared_cd(tbl, cd);
> +	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, cd);
>  }
>  
>  static struct iommu_sva *
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index bbc4dc75de72..e55567b4d2f4 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1987,7 +1987,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>  	mutex_init(&smmu_domain->init_mutex);
>  	INIT_LIST_HEAD(&smmu_domain->devices);
>  	spin_lock_init(&smmu_domain->devices_lock);
> -	INIT_LIST_HEAD(&smmu_domain->mmu_notifiers);
>  
>  	return &smmu_domain->domain;
>  }
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> index 13ff024ab0d5..8a689b4316ac 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> @@ -737,8 +737,6 @@ struct arm_smmu_domain {
>  
>  	struct list_head		devices;
>  	spinlock_t			devices_lock;
> -
> -	struct list_head		mmu_notifiers;
>  };
>  
>  static inline struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
> @@ -773,10 +771,6 @@ struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm,
>  void arm_smmu_sva_unbind(struct iommu_sva *handle);
>  u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle);
>  void arm_smmu_sva_notifier_synchronize(void);
> -struct arm_smmu_ctx_desc *
> -arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm);
> -void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
> -			     struct arm_smmu_ctx_desc *cd);
>  #else /* CONFIG_ARM_SMMU_V3_SVA */
>  static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
>  {
> @@ -832,13 +826,5 @@ static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
>  }
>  
>  static inline void arm_smmu_sva_notifier_synchronize(void) {}
> -struct arm_smmu_ctx_desc *
> -arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
> -{
> -	return ERR_PTR(-EINVAL);
> -}
> -
> -static inline void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
> -					   struct arm_smmu_ctx_desc *cd) {}
>  #endif /* CONFIG_ARM_SMMU_V3_SVA */
>  #endif /* _ARM_SMMU_V3_H */
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 97+ messages in thread

> +	struct iommu_sva			sva;
> +	struct mm_struct			*mm;
> +	struct iommu_psdtable_mmu_notifier	*smmu_mn;
> +	struct list_head			list;
> +	refcount_t				refs;
>  };
>  
>  #define sva_to_bond(handle) \
> @@ -41,20 +30,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
>  					 struct mm_struct *mm,
>  					 unsigned long start, unsigned long end)
>  {
> -	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
> +	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
>  	size_t size = end - start + 1;
>  
>  	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
> -		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
> +		arm_smmu_tlb_inv_range_asid(start, size, cd->asid,
>  					    PAGE_SIZE, false, smmu_domain);
>  	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
>  }
>  
>  static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>  {
> -	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
> +	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
>  	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>  
>  	mutex_lock(&sva_lock);
> @@ -69,7 +60,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>  	 */
>  	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, &quiet_cd);

Another comment for the parent series: I'd prefer making this an
"iommu_psdtable_quiesce()" call, instead of passing "quiet_cd" between
driver and library, because that won't work if the SMMU driver is built
as a module or is disabled - the virtio-iommu build will probably fail
since quiet_cd will be undefined. We could make the library built-in and
move quiet_cd there, but an explicit library call seems cleaner.
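
Roughly what I have in mind (untested sketch; assumes quiet_cd moves into
the pasid-table library next to this helper):

	/* in the pasid-table library */
	static inline int iommu_psdtable_quiesce(struct iommu_pasid_table *tbl,
						 int pasid)
	{
		/* Install the "quiet" descriptor: abort DMA, don't fault */
		return iommu_psdtable_write(tbl, &tbl->cfg, pasid, &quiet_cd);
	}

so the driver above would just do iommu_psdtable_quiesce(tbl, mm->pasid).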

>  
> -	iommu_psdtable_flush_tlb(tbl, smmu_domain, smmu_mn->cd->asid);
> +	iommu_psdtable_flush_tlb(tbl, smmu_domain, cd->asid);

We can directly call arm_smmu_tlb_inv* here. iommu_psdtable_flush_tlb()
should only be called from the library. But with the previous comment,
this invalidation would move to the library.
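
i.e. something like (assuming the existing helper stays visible to the
SVA code, as it is today):

	arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);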

>  	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
>  
>  	smmu_mn->cleared = true;
> @@ -78,7 +69,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>  
>  static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn)
>  {
> -	kfree(mn_to_smmu(mn));
> +	kfree(mn_to_pstiommu(mn));
>  }
>  
>  static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
> @@ -88,63 +79,59 @@ static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
>  };
>  
>  /* Allocate or get existing MMU notifier for this {domain, mm} pair */
> -static struct arm_smmu_mmu_notifier *
> +static struct iommu_psdtable_mmu_notifier *
>  arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
>  			  struct mm_struct *mm)
>  {
>  	int ret;
> -	struct arm_smmu_ctx_desc *cd;
> -	struct arm_smmu_mmu_notifier *smmu_mn;
> +	struct iommu_psdtable_mmu_notifier *smmu_mn;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>  
> -	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
> +	list_for_each_entry(smmu_mn, &tbl->mmu_notifiers, list) {
>  		if (smmu_mn->mn.mm == mm) {
>  			refcount_inc(&smmu_mn->refs);
>  			return smmu_mn;
>  		}
>  	}
>  
> -	cd = arm_smmu_alloc_shared_cd(tbl, mm);
> -	if (IS_ERR(cd))
> -		return ERR_CAST(cd);
> -
> -	smmu_mn = kzalloc(sizeof(*smmu_mn), GFP_KERNEL);
> -	if (!smmu_mn) {
> -		ret = -ENOMEM;
> -		goto err_free_cd;
> -	}
> +	mutex_lock(&arm_smmu_asid_lock);
> +	smmu_mn = iommu_psdtable_alloc_shared(tbl, mm, &arm_smmu_asid_xa,
> +					      smmu->asid_bits);
> +	mutex_unlock(&arm_smmu_asid_lock);
> +	if (IS_ERR(smmu_mn))
> +		return ERR_CAST(smmu_mn);
>  
>  	refcount_set(&smmu_mn->refs, 1);
> -	smmu_mn->cd = cd;
> -	smmu_mn->domain = smmu_domain;
> +	smmu_mn->cookie = smmu_domain;
>  	smmu_mn->mn.ops = &arm_smmu_mmu_notifier_ops;
>  
>  	ret = mmu_notifier_register(&smmu_mn->mn, mm);
> -	if (ret) {
> -		kfree(smmu_mn);
> +	if (ret)
>  		goto err_free_cd;
> -	}
>  
> -	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, cd);
> +	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
> +				   smmu_mn->vendor.cd);

Pass smmu_mn here, and let the library code get the cd (to allow for other
pasid table implementations)
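
i.e. the call site becomes something like (sketch only; the _mn op is an
invented name, and this assumes iommu_psdtable_write() keeps its void *
cookie argument):

	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, smmu_mn);

with each table driver digging its own descriptor out, e.g. for arm:

	static int arm_smmu_write_ctx_desc_mn(struct iommu_psdtable_cfg *cfg,
					      int ssid, void *cookie)
	{
		struct iommu_psdtable_mmu_notifier *mn = cookie;

		return arm_smmu_write_ctx_desc(cfg, ssid, mn->vendor.cd);
	}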

>  	if (ret)
>  		goto err_put_notifier;
>  
> -	list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
> +	list_add(&smmu_mn->list, &tbl->mmu_notifiers);

I'd keep the mmu_notifiers list in domain if the library doesn't use it
for anything.

That made me wonder whether the whole of arm_smmu_mmu_notifer_get/put()
could move to the library, since the virtio-iommu version seems to be the
same. They probably belong in iommu-sva-lib but we can revisit that when
there are more users.
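
If they do move, the library entry points could look something like this
(signatures only, names invented):

	struct iommu_psdtable_mmu_notifier *
	iommu_psdtable_mmu_notifier_get(struct iommu_pasid_table *tbl,
					struct mm_struct *mm,
					const struct mmu_notifier_ops *ops);
	void
	iommu_psdtable_mmu_notifier_put(struct iommu_pasid_table *tbl,
					struct iommu_psdtable_mmu_notifier *mn);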

Thanks,
Jean

>  	return smmu_mn;
>  
>  err_put_notifier:
>  	/* Frees smmu_mn */
>  	mmu_notifier_put(&smmu_mn->mn);
>  err_free_cd:
> -	arm_smmu_free_shared_cd(tbl, cd);
> +	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, smmu_mn->vendor.cd);
>  	return ERR_PTR(ret);
>  }
>  
> -static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
> +static void
> +arm_smmu_mmu_notifier_put(struct iommu_psdtable_mmu_notifier *smmu_mn)
>  {
>  	struct mm_struct *mm = smmu_mn->mn.mm;
> -	struct arm_smmu_ctx_desc *cd = smmu_mn->cd;
> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
>  	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>  
>  	if (!refcount_dec_and_test(&smmu_mn->refs))
> @@ -164,7 +151,7 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
>  
>  	/* Frees smmu_mn */
>  	mmu_notifier_put(&smmu_mn->mn);
> -	arm_smmu_free_shared_cd(tbl, cd);
> +	iommu_psdtable_free_shared(tbl, &arm_smmu_asid_xa, cd);
>  }
>  
>  static struct iommu_sva *
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index bbc4dc75de72..e55567b4d2f4 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1987,7 +1987,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>  	mutex_init(&smmu_domain->init_mutex);
>  	INIT_LIST_HEAD(&smmu_domain->devices);
>  	spin_lock_init(&smmu_domain->devices_lock);
> -	INIT_LIST_HEAD(&smmu_domain->mmu_notifiers);
>  
>  	return &smmu_domain->domain;
>  }
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> index 13ff024ab0d5..8a689b4316ac 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> @@ -737,8 +737,6 @@ struct arm_smmu_domain {
>  
>  	struct list_head		devices;
>  	spinlock_t			devices_lock;
> -
> -	struct list_head		mmu_notifiers;
>  };
>  
>  static inline struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
> @@ -773,10 +771,6 @@ struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm,
>  void arm_smmu_sva_unbind(struct iommu_sva *handle);
>  u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle);
>  void arm_smmu_sva_notifier_synchronize(void);
> -struct arm_smmu_ctx_desc *
> -arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm);
> -void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
> -			     struct arm_smmu_ctx_desc *cd);
>  #else /* CONFIG_ARM_SMMU_V3_SVA */
>  static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
>  {
> @@ -832,13 +826,5 @@ static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
>  }
>  
>  static inline void arm_smmu_sva_notifier_synchronize(void) {}
> -struct arm_smmu_ctx_desc *
> -arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
> -{
> -	return ERR_PTR(-EINVAL);
> -}
> -
> -static inline void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
> -					   struct arm_smmu_ctx_desc *cd) {}
>  #endif /* CONFIG_ARM_SMMU_V3_SVA */
>  #endif /* _ARM_SMMU_V3_H */
> -- 
> 2.17.1
> 

* Re: [PATCH RFC v1 09/11] iommu/virtio: Implement sva bind/unbind calls
@ 2021-09-21 16:09     ` Jean-Philippe Brucker
  0 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-09-21 16:09 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

On Fri, Apr 23, 2021 at 03:21:45PM +0530, Vivek Gautam wrote:
> The SVA bind and unbind implementations allow preparing a translation
> context from CPU page tables that can be programmed into the host IOMMU
> hardware, realizing a shared address space between the CPU and
> virtualized devices using virtio-iommu.
> 
> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 199 +++++++++++++++++++++++++++++-
>  include/uapi/linux/virtio_iommu.h |   2 +
>  2 files changed, 199 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 250c137a211b..08f1294baeab 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -14,6 +14,9 @@
>  #include <linux/interval_tree.h>
>  #include <linux/iommu.h>
>  #include <linux/io-pgtable.h>
> +#include <linux/mm.h>
> +#include <linux/mmu_context.h>
> +#include <linux/mmu_notifier.h>
>  #include <linux/module.h>
>  #include <linux/of_iommu.h>
>  #include <linux/of_platform.h>
> @@ -28,6 +31,7 @@
>  #include <uapi/linux/virtio_iommu.h>
>  #include "iommu-pasid-table.h"
>  #include "iommu-sva-lib.h"
> +#include "io-pgtable-arm.h"

Is this used here?

>  
>  #define MSI_IOVA_BASE			0x8000000
>  #define MSI_IOVA_LENGTH			0x100000
> @@ -41,6 +45,7 @@ DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
>  
>  static DEFINE_MUTEX(sva_lock);
>  static DEFINE_MUTEX(iopf_lock);
> +static DEFINE_MUTEX(viommu_asid_lock);
>  
>  struct viommu_dev_pri_work {
>  	struct work_struct		work;
> @@ -88,10 +93,22 @@ struct viommu_mapping {
>  struct viommu_mm {
>  	int				pasid;
>  	u64				archid;
> +	struct viommu_sva_bond		*bond;
>  	struct io_pgtable_ops		*ops;
>  	struct viommu_domain		*domain;
>  };
>  
> +struct viommu_sva_bond {
> +	struct iommu_sva		sva;
> +	struct mm_struct		*mm;
> +	struct iommu_psdtable_mmu_notifier	*viommu_mn;
> +	struct list_head		list;
> +	refcount_t			refs;
> +};
> +
> +#define sva_to_viommu_bond(handle) \
> +	container_of(handle, struct viommu_sva_bond, sva)
> +
>  struct viommu_domain {
>  	struct iommu_domain		domain;
>  	struct viommu_dev		*viommu;
> @@ -136,6 +153,7 @@ struct viommu_endpoint {
>  	bool				pri_supported;
>  	bool				sva_enabled;
>  	bool				iopf_enabled;
> +	struct list_head		bonds;
>  };
>  
>  struct viommu_ep_entry {
> @@ -1423,14 +1441,15 @@ static int viommu_attach_pasid_table(struct viommu_endpoint *vdev,
>  
>  		pst_cfg->iommu_dev = viommu->dev->parent;
>  
> +		mutex_lock(&viommu_asid_lock);
>  		/* Prepare PASID tables info to allocate a new table */
>  		ret = viommu_prepare_pst(vdev, pst_cfg, fmt);
>  		if (ret)
> -			return ret;
> +			goto err_out_unlock;
>  
>  		ret = iommu_psdtable_alloc(tbl, pst_cfg);
>  		if (ret)
> -			return ret;
> +			goto err_out_unlock;
>  
>  		pst_cfg->iommu_dev = viommu->dev->parent;
>  		pst_cfg->fmt = PASID_TABLE_ARM_SMMU_V3;
> @@ -1452,6 +1471,7 @@ static int viommu_attach_pasid_table(struct viommu_endpoint *vdev,
>  			if (ret)
>  				goto err_free_ops;
>  		}
> +		mutex_unlock(&viommu_asid_lock);
>  	} else {
>  		/* TODO: otherwise, check for compatibility with vdev. */
>  		return -ENOSYS;
> @@ -1467,6 +1487,8 @@ static int viommu_attach_pasid_table(struct viommu_endpoint *vdev,
>  err_free_psdtable:
>  	iommu_psdtable_free(tbl, &tbl->cfg);
>  
> +err_out_unlock:
> +	mutex_unlock(&viommu_asid_lock);
>  	return ret;
>  }
>  
> @@ -1706,6 +1728,7 @@ static struct iommu_device *viommu_probe_device(struct device *dev)
>  	vdev->dev = dev;
>  	vdev->viommu = viommu;
>  	INIT_LIST_HEAD(&vdev->resv_regions);
> +	INIT_LIST_HEAD(&vdev->bonds);
>  	dev_iommu_priv_set(dev, vdev);
>  
>  	if (viommu->probe_size) {
> @@ -1755,6 +1778,175 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  	return iommu_fwspec_add_ids(dev, args->args, 1);
>  }
>  
> +static u32 viommu_sva_get_pasid(struct iommu_sva *handle)
> +{
> +	struct viommu_sva_bond *bond = sva_to_viommu_bond(handle);
> +
> +	return bond->mm->pasid;
> +}
> +
> +static void viommu_mmu_notifier_free(struct mmu_notifier *mn)
> +{
> +	kfree(mn_to_pstiommu(mn));
> +}
> +
> +static struct mmu_notifier_ops viommu_mmu_notifier_ops = {
> +	.free_notifier		= viommu_mmu_notifier_free,

.invalidate_range and .release will be needed as well, to keep up to date
with changes to the address space
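
For reference, a skeleton mirroring the SMMUv3 SVA notifier (untested;
the viommu_mm_* names are hypothetical):

	static void viommu_mm_invalidate_range(struct mmu_notifier *mn,
					       struct mm_struct *mm,
					       unsigned long start,
					       unsigned long end)
	{
		/* invalidate TLB/ATC entries for [start, end) */
	}

	static void viommu_mm_release(struct mmu_notifier *mn,
				      struct mm_struct *mm)
	{
		/* quiesce the context and invalidate, as SMMUv3 SVA does */
	}

	static struct mmu_notifier_ops viommu_mmu_notifier_ops = {
		.invalidate_range	= viommu_mm_invalidate_range,
		.release		= viommu_mm_release,
		.free_notifier		= viommu_mmu_notifier_free,
	};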

> +};
> +
> +/* Allocate or get existing MMU notifier for this {domain, mm} pair */
> +static struct iommu_psdtable_mmu_notifier *
> +viommu_mmu_notifier_get(struct viommu_domain *vdomain, struct mm_struct *mm,
> +			u32 asid_bits)
> +{
> +	int ret;
> +	struct iommu_psdtable_mmu_notifier *viommu_mn;
> +	struct iommu_pasid_table *tbl = vdomain->pasid_tbl;
> +
> +	list_for_each_entry(viommu_mn, &tbl->mmu_notifiers, list) {
> +		if (viommu_mn->mn.mm == mm) {
> +			refcount_inc(&viommu_mn->refs);
> +			return viommu_mn;
> +		}
> +	}
> +
> +	mutex_lock(&viommu_asid_lock);
> +	viommu_mn = iommu_psdtable_alloc_shared(tbl, mm, &viommu_asid_xa,
> +						asid_bits);
> +	mutex_unlock(&viommu_asid_lock);
> +	if (IS_ERR(viommu_mn))
> +		return ERR_CAST(viommu_mn);
> +
> +	refcount_set(&viommu_mn->refs, 1);
> +	viommu_mn->cookie = vdomain;
> +	viommu_mn->mn.ops = &viommu_mmu_notifier_ops;
> +
> +	ret = mmu_notifier_register(&viommu_mn->mn, mm);
> +	if (ret)
> +		goto err_free_cd;
> +
> +	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
> +				   viommu_mn->vendor.cd);
> +	if (ret)
> +		goto err_put_notifier;
> +
> +	list_add(&viommu_mn->list, &tbl->mmu_notifiers);
> +	return viommu_mn;
> +
> +err_put_notifier:
> +	/* Frees viommu_mn */
> +	mmu_notifier_put(&viommu_mn->mn);
> +err_free_cd:
> +	iommu_psdtable_free_shared(tbl, &viommu_asid_xa, viommu_mn->vendor.cd);
> +	return ERR_PTR(ret);
> +}
> +
> +static void
> +viommu_mmu_notifier_put(struct iommu_psdtable_mmu_notifier *viommu_mn)
> +{
> +	struct mm_struct *mm = viommu_mn->mn.mm;
> +	struct viommu_domain *vdomain = viommu_mn->cookie;
> +	struct iommu_pasid_table *tbl = vdomain->pasid_tbl;
> +	u16 asid = viommu_mn->vendor.cd->asid;
> +
> +	if (!refcount_dec_and_test(&viommu_mn->refs))
> +		return;
> +
> +	list_del(&viommu_mn->list);
> +	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, NULL);
> +
> +	/*
> +	 * If we went through clear(), we've already invalidated, and no
> +	 * new TLB entry can have been formed.
> +	 */
> +	if (!viommu_mn->cleared)
> +		iommu_psdtable_flush_tlb(tbl, vdomain, asid);
> +
> +	/* Frees smmu_mn */
> +	mmu_notifier_put(&viommu_mn->mn);
> +	iommu_psdtable_free_shared(tbl, &viommu_asid_xa, viommu_mn->vendor.cd);
> +}
> +
> +static struct iommu_sva *
> +__viommu_sva_bind(struct device *dev, struct mm_struct *mm)
> +{
> +	int ret;
> +	struct viommu_sva_bond *bond;
> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +	struct virtio_iommu_probe_table_format *desc = vdev->pgtf;
> +
> +	if (!vdev || !vdev->sva_enabled)
> +		return ERR_PTR(-ENODEV);
> +
> +	/* If bind() was already called for this {dev, mm} pair, reuse it. */
> +	list_for_each_entry(bond, &vdev->bonds, list) {
> +		if (bond->mm == mm) {
> +			refcount_inc(&bond->refs);
> +			return &bond->sva;
> +		}
> +	}
> +
> +	bond = kzalloc(sizeof(*bond), GFP_KERNEL);
> +	if (!bond)
> +		return ERR_PTR(-ENOMEM);
> +
> +	/* Allocate a PASID for this mm if necessary */
> +	ret = iommu_sva_alloc_pasid(mm, 1, (1U << vdev->pasid_bits) - 1);
> +	if (ret)
> +		goto err_free_bond;
> +
> +	bond->mm = mm;
> +	bond->sva.dev = dev;
> +	refcount_set(&bond->refs, 1);
> +
> +	bond->viommu_mn = viommu_mmu_notifier_get(vdomain, mm, desc->asid_bits);
> +	if (IS_ERR(bond->viommu_mn)) {
> +		ret = PTR_ERR(bond->viommu_mn);
> +		goto err_free_pasid;
> +	}
> +
> +	list_add(&bond->list, &vdev->bonds);
> +	return &bond->sva;
> +
> +err_free_pasid:
> +	iommu_sva_free_pasid(mm);
> +err_free_bond:
> +	kfree(bond);
> +	return ERR_PTR(ret);
> +}
> +
> +/* closely follows arm_smmu_sva_bind() */
> +static struct iommu_sva *viommu_sva_bind(struct device *dev,
> +					 struct mm_struct *mm, void *drvdata)
> +{
> +	struct iommu_sva *handle;
> +
> +	mutex_lock(&sva_lock);
> +	handle = __viommu_sva_bind(dev, mm);
> +	mutex_unlock(&sva_lock);
> +	return handle;
> +}
> +
> +void viommu_sva_unbind(struct iommu_sva *handle)
> +{
> +	struct viommu_sva_bond *bond = sva_to_viommu_bond(handle);
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(handle->dev);
> +
> +	if (vdev->pri_supported)
> +		iopf_queue_flush_dev(handle->dev);
> +
> +	mutex_lock(&sva_lock);
> +	if (refcount_dec_and_test(&bond->refs)) {
> +		list_del(&bond->list);
> +		viommu_mmu_notifier_put(bond->viommu_mn);
> +		iommu_sva_free_pasid(bond->mm);
> +		kfree(bond);
> +	}
> +	mutex_unlock(&sva_lock);
> +}
> +
>  static bool viommu_endpoint_iopf_supported(struct viommu_endpoint *vdev)
>  {
>  	/* TODO: support Stall model later */
> @@ -1960,6 +2152,9 @@ static struct iommu_ops viommu_ops = {
>  	.dev_feat_enabled	= viommu_dev_feature_enabled,
>  	.dev_enable_feat	= viommu_dev_enable_feature,
>  	.dev_disable_feat	= viommu_dev_disable_feature,
> +	.sva_bind		= viommu_sva_bind,
> +	.sva_unbind		= viommu_sva_unbind,
> +	.sva_get_pasid		= viommu_sva_get_pasid,
>  };
>  
>  static int viommu_init_vqs(struct viommu_dev *viommu)
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 88a3db493108..c12d9b6a7243 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -122,6 +122,8 @@ struct virtio_iommu_req_attach_pst_arm {
>  #define VIRTIO_IOMMU_PGTF_ARM_HPD0		(1ULL << 41)
>  #define VIRTIO_IOMMU_PGTF_ARM_EPD1		(1 << 23)
>  
> +#define VIRTIO_IOMMU_PGTF_ARM_IPS_SHIFT		32
> +#define VIRTIO_IOMMU_PGTF_ARM_IPS_MASK		0x7

Probably not the right place for this change

Thanks,
Jean

>  #define VIRTIO_IOMMU_PGTF_ARM_TG0_SHIFT		14
>  #define VIRTIO_IOMMU_PGTF_ARM_TG0_MASK		0x3
>  #define VIRTIO_IOMMU_PGTF_ARM_SH0_SHIFT		12
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 10/11] uapi/virtio-iommu: Add a new request type to send page response
  2021-04-23  9:51   ` Vivek Gautam
@ 2021-09-21 16:16     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-09-21 16:16 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

On Fri, Apr 23, 2021 at 03:21:46PM +0530, Vivek Gautam wrote:
> Once the page faults are handled, the response has to be sent to
> virtio-iommu backend, from where it can be sent to the host to
> prepare the response to a generated io page fault by the device.
> Add a new virt-queue request type to handle this.
> 
> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> ---
>  include/uapi/linux/virtio_iommu.h | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index c12d9b6a7243..1b174b98663a 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -48,6 +48,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_PROBE			0x05
>  #define VIRTIO_IOMMU_T_ATTACH_TABLE		0x06
>  #define VIRTIO_IOMMU_T_INVALIDATE		0x07
> +#define VIRTIO_IOMMU_T_PAGE_RESP		0x08
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -70,6 +71,23 @@ struct virtio_iommu_req_tail {
>  	__u8					reserved[3];
>  };
>  
> +struct virtio_iommu_req_page_resp {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;

I don't think we need this field, since the fault report doesn't come with
a domain.

> +	__le32					endpoint;
> +#define VIRTIO_IOMMU_PAGE_RESP_PASID_VALID	(1 << 0)

To be consistent with the rest of the document this would be
VIRTIO_IOMMU_PAGE_RESP_F_PASID_VALID

> +	__le32					flags;
> +	__le32					pasid;
> +	__le32					grpid;
> +#define VIRTIO_IOMMU_PAGE_RESP_SUCCESS		(0x0)
> +#define VIRTIO_IOMMU_PAGE_RESP_INVALID		(0x1)
> +#define VIRTIO_IOMMU_PAGE_RESP_FAILURE		(0x2)
> +	__le16					resp_code;
> +	__u8					pasid_valid;

This field isn't needed since there already is
VIRTIO_IOMMU_PAGE_RESP_PASID_VALID

> +	__u8					reserved[9];
> +	struct virtio_iommu_req_tail		tail;
> +};

I'd align the size of the struct to 16 bytes, but I don't think that's
strictly necessary.
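
Folding the three comments above into the layout, and assuming the
existing 4-byte virtio_iommu_req_head and virtio_iommu_req_tail, the
struct could look like this (illustrative only; reserved[6] pads the
total to 32 bytes):

	struct virtio_iommu_req_page_resp {
		struct virtio_iommu_req_head	head;
		__le32				endpoint;
	#define VIRTIO_IOMMU_PAGE_RESP_F_PASID_VALID	(1 << 0)
		__le32				flags;
		__le32				pasid;
		__le32				grpid;
	#define VIRTIO_IOMMU_PAGE_RESP_SUCCESS		(0x0)
	#define VIRTIO_IOMMU_PAGE_RESP_INVALID		(0x1)
	#define VIRTIO_IOMMU_PAGE_RESP_FAILURE		(0x2)
		__le16				resp_code;
		__u8				reserved[6];
		struct virtio_iommu_req_tail	tail;
	};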

Thanks,
Jean

> +
>  struct virtio_iommu_req_attach {
>  	struct virtio_iommu_req_head		head;
>  	__le32					domain;
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 01/11] uapi/virtio-iommu: Add page request grp-id and flags information
  2021-09-21 15:58     ` Jean-Philippe Brucker
@ 2021-09-30  4:56       ` Vivek Kumar Gautam
  -1 siblings, 0 replies; 97+ messages in thread
From: Vivek Kumar Gautam @ 2021-09-30  4:56 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: kevin.tian, Alex Williamson, mst, will.deacon, linux-kernel,
	virtualization, iommu, robin.murphy, linux-arm-kernel

Hi Jean,


On 9/21/21 9:28 PM, Jean-Philippe Brucker wrote:
> Hi Vivek,
> 
> Thanks a lot for your work on this

Thanks a lot for taking a look at it. I hope that most of the changes in
this series and the nested page table series [1] are independent of the
ongoing /dev/iommu proposal, and can be reviewed separately.

Please find my comments inline below.

> 
> On Fri, Apr 23, 2021 at 03:21:37PM +0530, Vivek Gautam wrote:
>> Add fault information for group-id and necessary flags for page
>> request faults that can be handled by page fault handler in
>> virtio-iommu driver.
>>
>> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
>> Cc: Joerg Roedel <joro@8bytes.org>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Robin Murphy <robin.murphy@arm.com>
>> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
>> Cc: Eric Auger <eric.auger@redhat.com>
>> Cc: Alex Williamson <alex.williamson@redhat.com>
>> Cc: Kevin Tian <kevin.tian@intel.com>
>> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
>> Cc: Liu Yi L <yi.l.liu@intel.com>
>> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
>> Cc: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
>> ---
>>   include/uapi/linux/virtio_iommu.h | 13 +++++++++++++
>>   1 file changed, 13 insertions(+)
>>
>> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
>> index f8bf927a0689..accc3318ce46 100644
>> --- a/include/uapi/linux/virtio_iommu.h
>> +++ b/include/uapi/linux/virtio_iommu.h
>> @@ -307,14 +307,27 @@ struct virtio_iommu_req_invalidate {
>>   #define VIRTIO_IOMMU_FAULT_F_DMA_UNRECOV	1
>>   #define VIRTIO_IOMMU_FAULT_F_PAGE_REQ		2
>>   
>> +#define VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID		(1 << 0)
>> +#define VIRTIO_IOMMU_FAULT_PRQ_F_LAST_PAGE		(1 << 1)
>> +#define VIRTIO_IOMMU_FAULT_PRQ_F_PRIV_DATA		(1 << 2)
>> +#define VIRTIO_IOMMU_FAULT_PRQ_F_NEEDS_PASID		(1 << 3)
> 
> I don't think this one is necessary here. The NEEDS_PASID flags added by
> commit 970471914c67 ("iommu: Allow page responses without PASID") mainly
> helps Linux keep track of things internally. It does tell the fault
> handler whether to reply with PASID or not, but we don't need that here.
> The virtio-iommu driver knows whether a PASID is required by looking at
> the "PRG Response PASID Required" bit in the PCIe capability. For non-PCIe
> faults (e.g. SMMU stall), I'm guessing we'll need a PROBE property to
> declare that the endpoint supports recoverable faults anyway, so "PASID
> required in response" can go through there as well.

Sure, I will remove this flag and instead read the PCIe capability to
find out whether a PASID is required. After this series, I will follow
up with the non-PCIe fault handling.
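
A minimal sketch of that check; pci_prg_resp_pasid_required() is the
existing helper reading the "PRG Response PASID Required" bit of the
PRI capability, while the wrapper name is made up:

	#include <linux/pci.h>
	#include <linux/pci-ats.h>

	static bool viommu_page_resp_needs_pasid(struct device *dev)
	{
		/* Non-PCIe endpoints will need a PROBE property instead */
		if (!dev_is_pci(dev))
			return false;

		return pci_prg_resp_pasid_required(to_pci_dev(dev));
	}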

> 
>> +
>> +#define VIRTIO_IOMMU_FAULT_UNREC_F_PASID_VALID		(1 << 0)
>> +#define VIRTIO_IOMMU_FAULT_UNREC_F_ADDR_VALID		(1 << 1)
>> +#define VIRTIO_IOMMU_FAULT_UNREC_F_FETCH_ADDR_VALID	(1 << 2)
>> +
>>   struct virtio_iommu_fault {
>>   	__u8					reason;
>>   	__u8					reserved[3];
>>   	__le16					flt_type;
>>   	__u8					reserved2[2];
>> +	/* flags is actually permission flags */
> 
> It's also used for declaring validity of fields.
> VIRTIO_IOMMU_FAULT_F_ADDRESS already tells whether the address field is
> valid, so all the other flags introduced by this patch can go in here.

Sure, I will remove the pr_evt_flags field and move all the flags into 'flags'.

> 
>>   	__le32					flags;
>> +	/* flags for PASID and Page request handling info */
>> +	__le32					pr_evt_flags;
>>   	__le32					endpoint;
>>   	__le32					pasid;
>> +	__le32					grpid;
> 
> I'm not sure why we made it 32-bit in Linux UAPI, it's a little wasteful.
> PCIe PRGI is 9-bits and SMMU STAG is 16-bits. Since the scope of the grpid
> is the endpoint, 16-bit means 64k in-flight faults per endpoint, which
> seems more than enough.

Right, I will update this to a 16-bit field. It won't be okay to update
the iommu uAPI now, right?

> 
> New fields must be appended at the end of the struct, because old drivers
> will expect to find the 'endpoint' field at this offset. You could remove
> 'reserved3' while adding 'grpid', to keep the struct layout.

Sure, I will update this.

> 
>>   	__u8					reserved3[4];
>>   	__le64					address;
>>   	__u8					reserved4[8];
> 
> 
> So the base structure, currently in the spec, looks like this:
> 
> 	struct virtio_iommu_fault {
> 		u8   reason;
> 		u8   reserved[3];
> 		le32 flags;
> 		le32 endpoint;
> 		le32 reserved1;
> 		le64 address;
> 	};
> 
> 	#define VIRTIO_IOMMU_FAULT_F_READ	(1 << 0)
> 	#define VIRTIO_IOMMU_FAULT_F_WRITE	(1 << 1)
> 	#define VIRTIO_IOMMU_FAULT_F_ADDRESS	(1 << 8)
> 
> The extended struct could be:
> 
> 	struct virtio_iommu_fault {
> 		u8   reason;
> 		u8   reserved[3];
> 		le32 flags;
> 		le32 endpoint;
> 		le32 pasid;
> 		le64 address;
> 		/* Page request group ID */
> 		le16 group_id;
> 		u8   reserved1[6];
> 		/* For VT-d private data */
> 		le64 private_data[2];
> 	};
> 
> 	#define VIRTIO_IOMMU_FAULT_F_READ	(1 << 0)
> 	#define VIRTIO_IOMMU_FAULT_F_WRITE	(1 << 1)
> 	#define VIRTIO_IOMMU_FAULT_F_EXEC	(1 << 2)
> 	#define VIRTIO_IOMMU_FAULT_F_PRIVILEGED	(1 << 3)
> 	/* Last fault in group */
> 	#define VIRTIO_IOMMU_FAULT_F_LAST	(1 << 4)
> 	/* Fault is a recoverable page request and requires a response */
> 	#define VIRTIO_IOMMU_FAULT_F_PAGE_REQ	(1 << 5)
> 
> 	/* address field is valid */
> 	#define VIRTIO_IOMMU_FAULT_F_ADDRESS	(1 << 8)
> 	/* pasid field is valid */
> 	#define VIRTIO_IOMMU_FAULT_F_PASID	(1 << 9)
> 	/* group_id field is valid */
> 	#define VIRTIO_IOMMU_FAULT_F_GROUP_ID	(1 << 10)
> 	/* private data field is valid */
> 	#define VIRTIO_IOMMU_FAULT_F_PRIV_DATA	(1 << 11)

Thanks Jean for summarizing it here. I will update the patch.

Best regards
Vivek
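
As a sanity check of the extended record quoted above, a sketch of how
the driver side would consume it (viommu_handle_ppr() is a placeholder
for the eventual page-request path, not an existing function):

	static void viommu_handle_fault(struct viommu_dev *viommu,
					struct virtio_iommu_fault *fault)
	{
		u32 flags = le32_to_cpu(fault->flags);
		bool has_pasid = flags & VIRTIO_IOMMU_FAULT_F_PASID;

		/* Only act on recoverable page requests with valid fields */
		if (!(flags & VIRTIO_IOMMU_FAULT_F_PAGE_REQ) ||
		    !(flags & VIRTIO_IOMMU_FAULT_F_ADDRESS) ||
		    !(flags & VIRTIO_IOMMU_FAULT_F_GROUP_ID))
			return;

		viommu_handle_ppr(viommu, le32_to_cpu(fault->endpoint),
				  has_pasid ? le32_to_cpu(fault->pasid) : 0,
				  le16_to_cpu(fault->group_id),
				  le64_to_cpu(fault->address));
	}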

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 01/11] uapi/virtio-iommu: Add page request grp-id and flags information
  2021-09-30  4:56       ` Vivek Kumar Gautam
@ 2021-09-30  8:49         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-09-30  8:49 UTC (permalink / raw)
  To: Vivek Kumar Gautam
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Alex Williamson

On Thu, Sep 30, 2021 at 10:26:35AM +0530, Vivek Kumar Gautam wrote:
> > I'm not sure why we made it 32-bit in Linux UAPI, it's a little wasteful.
> > PCIe PRGI is 9-bits and SMMU STAG is 16-bits. Since the scope of the grpid
> > is the endpoint, 16-bit means 64k in-flight faults per endpoint, which
> > seems more than enough.
> 
> Right, I will update this to 16-bits field. It won't be okay to update the
> iommu uAPI now, right?

Since there is no UAPI transport for the fault request/response at the
moment, it should be possible to update it.
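
For illustration, the change in include/uapi/linux/iommu.h would be
small (a sketch only; the explicit padding keeps the later fields
aligned):

	struct iommu_fault_page_request {
		__u32	flags;
		__u32	pasid;
		__u16	grpid;		/* was __u32; PRGI is 9 bits, STAG 16 bits */
		__u16	reserved;	/* keep 'perm' and 'addr' aligned */
		__u32	perm;
		__u64	addr;
		__u64	private_data[2];
	};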

Thanks,
Jean

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served by viommu_dev
  2021-09-21 15:59     ` Jean-Philippe Brucker
@ 2021-09-30  9:17       ` Vivek Kumar Gautam
  0 siblings, 0 replies; 97+ messages in thread
From: Vivek Kumar Gautam @ 2021-09-30  9:17 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi



On 9/21/21 9:29 PM, Jean-Philippe Brucker wrote:
> On Fri, Apr 23, 2021 at 03:21:38PM +0530, Vivek Gautam wrote:
>> Keeping a record of list of endpoints that are served by the virtio-iommu
>> device would help in redirecting the requests of page faults to the
>> correct endpoint device to handle such requests.
>>
>> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
>> ---
>>   drivers/iommu/virtio-iommu.c | 21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
>> index 50039070e2aa..c970f386f031 100644
>> --- a/drivers/iommu/virtio-iommu.c
>> +++ b/drivers/iommu/virtio-iommu.c
>> @@ -48,6 +48,7 @@ struct viommu_dev {
>>   	spinlock_t			request_lock;
>>   	struct list_head		requests;
>>   	void				*evts;
>> +	struct list_head		endpoints;
> 
> As we're going to search by ID, an xarray or rb_tree would be more
> appropriate than a list

Sure, I will update this with an rb_tree.

> 
>>   
>>   	/* Device configuration */
>>   	struct iommu_domain_geometry	geometry;
>> @@ -115,6 +116,12 @@ struct viommu_endpoint {
>>   	void				*pgtf;
>>   };
>>   
>> +struct viommu_ep_entry {
>> +	u32				eid;
>> +	struct viommu_endpoint		*vdev;
>> +	struct list_head		list;
>> +};
> 
> No need for a separate struct, I think you can just add the list head and
> id into viommu_endpoint.

Yes, that's right. I will update it.

> 
>> +
>>   struct viommu_request {
>>   	struct list_head		list;
>>   	void				*writeback;
>> @@ -573,6 +580,7 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>>   	size_t probe_len;
>>   	struct virtio_iommu_req_probe *probe;
>>   	struct virtio_iommu_probe_property *prop;
>> +	struct viommu_ep_entry *ep;
>>   	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>>   	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
>>   
>> @@ -640,6 +648,18 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>>   		prop = (void *)probe->properties + cur;
>>   		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
>>   	}
>> +	if (ret)
>> +		goto out_free;
>> +
>> +	ep = kzalloc(sizeof(*ep), GFP_KERNEL);
>> +	if (!ep) {
>> +		ret = -ENOMEM;
>> +		goto out_free;
>> +	}
>> +	ep->eid = probe->endpoint;
>> +	ep->vdev = vdev;
>> +
>> +	list_add(&ep->list, &viommu->endpoints);
> 
> This should be in viommu_probe_device() (viommu_probe_endpoint() is only
> called if F_PROBE is negotiated). I think we need a lock for this
> list/xarray

Sure, will fix this and add the needed locking.
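
As a rough sketch of that rework (assuming the eid and a viommu
back-pointer move into struct viommu_endpoint, and the endpoints list
becomes an xarray initialised in viommu_probe()):

	/* In struct viommu_dev: */
	struct xarray		endpoints;	/* eid -> viommu_endpoint */

	/* Registration, e.g. at the end of viommu_probe_device(): */
	ret = xa_insert(&viommu->endpoints, vdev->eid, vdev, GFP_KERNEL);
	if (ret)		/* -EBUSY if the eid is already present */
		return ret;

	/* Lookup when a fault arrives for an endpoint ID: */
	vdev = xa_load(&viommu->endpoints, eid);

xa_insert() takes the xarray's internal spinlock and xa_load() is
RCU-safe, so this simple insert/lookup pattern may not need an extra
lock, though endpoint removal on device teardown still needs care.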

Thanks & regards
Vivek

> 
> Thanks,
> Jean
> 
>>   
>>   out_free:
>>   	kfree(probe);
>> @@ -1649,6 +1669,7 @@ static int viommu_probe(struct virtio_device *vdev)
>>   	viommu->dev = dev;
>>   	viommu->vdev = vdev;
>>   	INIT_LIST_HEAD(&viommu->requests);
>> +	INIT_LIST_HEAD(&viommu->endpoints);
>>   
>>   	ret = viommu_init_vqs(viommu);
>>   	if (ret)
>> -- 
>> 2.17.1
>>

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 10/11] uapi/virtio-iommu: Add a new request type to send page response
  2021-09-21 16:16     ` Jean-Philippe Brucker
@ 2021-09-30  9:24       ` Vivek Kumar Gautam
  0 siblings, 0 replies; 97+ messages in thread
From: Vivek Kumar Gautam @ 2021-09-30  9:24 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

Hi Jean,


On 9/21/21 9:46 PM, Jean-Philippe Brucker wrote:
> On Fri, Apr 23, 2021 at 03:21:46PM +0530, Vivek Gautam wrote:
>> Once the page faults are handled, the response has to be sent to
>> virtio-iommu backend, from where it can be sent to the host to
>> prepare the response to a generated io page fault by the device.
>> Add a new virt-queue request type to handle this.
>>
>> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
>> ---
>>   include/uapi/linux/virtio_iommu.h | 18 ++++++++++++++++++
>>   1 file changed, 18 insertions(+)
>>
>> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
>> index c12d9b6a7243..1b174b98663a 100644
>> --- a/include/uapi/linux/virtio_iommu.h
>> +++ b/include/uapi/linux/virtio_iommu.h
>> @@ -48,6 +48,7 @@ struct virtio_iommu_config {
>>   #define VIRTIO_IOMMU_T_PROBE			0x05
>>   #define VIRTIO_IOMMU_T_ATTACH_TABLE		0x06
>>   #define VIRTIO_IOMMU_T_INVALIDATE		0x07
>> +#define VIRTIO_IOMMU_T_PAGE_RESP		0x08
>>   
>>   /* Status types */
>>   #define VIRTIO_IOMMU_S_OK			0x00
>> @@ -70,6 +71,23 @@ struct virtio_iommu_req_tail {
>>   	__u8					reserved[3];
>>   };
>>   
>> +struct virtio_iommu_req_page_resp {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
> 
> I don't think we need this field, since the fault report doesn't come with
> a domain.

But here we are sending the response, which would ultimately be
consumed by VFIO. In kvmtool, I am consuming this
"virtio_iommu_req_page_resp" request in the virtio/iommu driver,
extracting the domain from it, and using that to call the respective
"page_response" op from "vfio_iommu_ops" in the vfio/core driver.

Is this an incorrect way of passing the page response back to the host
kernel?

But I think this will have to be worked out with the /dev/iommu framework.

> 
>> +	__le32					endpoint;
>> +#define VIRTIO_IOMMU_PAGE_RESP_PASID_VALID	(1 << 0)
> 
> To be consistent with the rest of the document this would be
> VIRTIO_IOMMU_PAGE_RESP_F_PASID_VALID

Sure, will update this.

> 
>> +	__le32					flags;
>> +	__le32					pasid;
>> +	__le32					grpid;
>> +#define VIRTIO_IOMMU_PAGE_RESP_SUCCESS		(0x0)
>> +#define VIRTIO_IOMMU_PAGE_RESP_INVALID		(0x1)
>> +#define VIRTIO_IOMMU_PAGE_RESP_FAILURE		(0x2)
>> +	__le16					resp_code;
>> +	__u8					pasid_valid;
> 
> This field isn't needed since there already is
> VIRTIO_IOMMU_PAGE_RESP_PASID_VALID

Yes, sure will remove this field.

> 
>> +	__u8					reserved[9];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
> 
> I'd align the size of the struct to 16 bytes, but I don't think that's
> strictly necessary.

Will fix this. Thanks a lot for reviewing.
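
Pulling those three points together, the request could end up looking
something like this (a sketch only; the domain field is the open
question above, and the arithmetic assumes the usual 4-byte
virtio_iommu_req_head/tail):

	struct virtio_iommu_req_page_resp {
		struct virtio_iommu_req_head	head;		/*  4 bytes */
		__le32				domain;		/* under discussion */
		__le32				endpoint;
	#define VIRTIO_IOMMU_PAGE_RESP_F_PASID_VALID	(1 << 0)
		__le32				flags;
		__le32				pasid;
		__le32				grpid;
	#define VIRTIO_IOMMU_PAGE_RESP_SUCCESS		(0x0)
	#define VIRTIO_IOMMU_PAGE_RESP_INVALID		(0x1)
	#define VIRTIO_IOMMU_PAGE_RESP_FAILURE		(0x2)
		__le16				resp_code;
		__u8				reserved[2];	/* total: 32 bytes */
		struct virtio_iommu_req_tail	tail;
	};

The standalone pasid_valid byte is gone in favour of the
..._F_PASID_VALID flag, and the shrunk padding keeps the structure a
multiple of 16 bytes.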

Best regards
Vivek

> 
> Thanks,
> Jean
> 
>> +
>>   struct virtio_iommu_req_attach {
>>   	struct virtio_iommu_req_head		head;
>>   	__le32					domain;
>> -- 
>> 2.17.1
>>

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 08/11] iommu/arm-smmu-v3: Implement shared context alloc and free ops
  2021-09-21 16:07     ` Jean-Philippe Brucker
@ 2021-09-30  9:50       ` Vivek Kumar Gautam
  0 siblings, 0 replies; 97+ messages in thread
From: Vivek Kumar Gautam @ 2021-09-30  9:50 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

Hi Jean,


On 9/21/21 9:37 PM, Jean-Philippe Brucker wrote:
> On Fri, Apr 23, 2021 at 03:21:44PM +0530, Vivek Gautam wrote:
>> Implementing the alloc_shared_cd and free_shared_cd in cd-lib, and
>> start using them for arm-smmu-v3-sva implementation.
>>
>> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
>> ---
>>   .../arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c      | 71 ++++++++--------
>>   .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 83 ++++++++-----------
>>   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   |  1 -
>>   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   | 14 ----
>>   4 files changed, 73 insertions(+), 96 deletions(-)
>>
>> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
>> index 537b7c784d40..b87829796596 100644
>> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
>> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
>> @@ -285,16 +285,14 @@ static bool arm_smmu_free_asid(struct xarray *xa, void *cookie_cd)
>>    * descriptor is using it, try to replace it.
>>    */
>>   static struct arm_smmu_ctx_desc *
>> -arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
>> +arm_smmu_share_asid(struct iommu_pasid_table *tbl, struct mm_struct *mm,
>> +		    struct xarray *xa, u16 asid, u32 asid_bits)
> 
> xa and asid_bits could be stored in some arch-specific section of the
> iommu_pasid_table struct. Other table drivers wouldn't need those
> arguments.

Okay, will move them to a separate structure section.
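
Something along these lines, perhaps (illustrative only; the member
names and the arch union are assumptions):

	/* ARM-specific state, so other pasid-table implementations
	 * don't have to carry xa/asid_bits through the common API.
	 */
	struct arm_smmu_pst_data {
		struct xarray	*asid_xa;	/* shared ASID space */
		u32		asid_bits;
	};

	struct iommu_pasid_table {
		struct iommu_vendor_psdtable_ops	*ops;
		struct iommu_psdtable_cfg		cfg;
		void					*cookie;
		struct list_head			mmu_notifiers;
		union {
			struct arm_smmu_pst_data	arm;
		} arch;
	};

arm_smmu_share_asid() could then read tbl->arch.arm.asid_xa and
tbl->arch.arm.asid_bits internally instead of taking them as arguments.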

> 
> More a comment for the parent series: it may be clearer to give a
> different prefix to functions in this file (arm_smmu_cd_, pst_arm_?).
> Reading this patch I'm a little confused by what belongs in the IOMMU
> driver and what is done by this library. (I also keep reading 'tbl' as
> 'tlb'. Maybe we could make it 'table' since that doesn't take a lot more
> space)

Yes, this may be confusing. I will fix these names in my next version.

> 
>>   {
>>   	int ret;
>>   	u32 new_asid;
>>   	struct arm_smmu_ctx_desc *cd;
>> -	struct arm_smmu_device *smmu;
>> -	struct arm_smmu_domain *smmu_domain;
>> -	struct iommu_pasid_table *tbl;
>>   
>> -	cd = xa_load(&arm_smmu_asid_xa, asid);
>> +	cd = xa_load(xa, asid);
>>   	if (!cd)
>>   		return NULL;
>>   
>> @@ -306,12 +304,8 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
>>   		return cd;
>>   	}
>>   
>> -	smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
>> -	smmu = smmu_domain->smmu;
>> -	tbl = smmu_domain->tbl;
>> -
>> -	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
>> -		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
>> +	ret = xa_alloc(xa, &new_asid, cd, XA_LIMIT(1, (1 << asid_bits) - 1),
>> +		       GFP_KERNEL);
>>   	if (ret)
>>   		return ERR_PTR(-ENOSPC);
>>   	/*
>> @@ -325,48 +319,52 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
>>   	 * be some overlap between use of both ASIDs, until we invalidate the
>>   	 * TLB.
>>   	 */
>> -	ret = iommu_psdtable_write(tbl, &tbl->cfg, 0, cd);
>> +	ret = arm_smmu_write_ctx_desc(&tbl->cfg, 0, cd);
>>   	if (ret)
>>   		return ERR_PTR(-ENOSYS);
>>   
>>   	/* Invalidate TLB entries previously associated with that context */
>> -	iommu_psdtable_flush_tlb(tbl, smmu_domain, asid);
>> +	iommu_psdtable_flush_tlb(tbl, tbl->cookie, asid);
>>   
>> -	xa_erase(&arm_smmu_asid_xa, asid);
>> +	xa_erase(xa, asid);
>>   	return NULL;
>>   }
>>   
>> -struct arm_smmu_ctx_desc *
>> -arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
>> +static struct iommu_psdtable_mmu_notifier *
>> +arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm,
>> +			 struct xarray *xa, u32 asid_bits)
>>   {
>>   	u16 asid;
>>   	int err = 0;
>>   	u64 tcr, par, reg;
>>   	struct arm_smmu_ctx_desc *cd;
>>   	struct arm_smmu_ctx_desc *ret = NULL;
>> +	struct iommu_psdtable_mmu_notifier *pst_mn;
>>   
>>   	asid = arm64_mm_context_get(mm);
>>   	if (!asid)
>>   		return ERR_PTR(-ESRCH);
>>   
>> +	pst_mn = kzalloc(sizeof(*pst_mn), GFP_KERNEL);
>> +	if (!pst_mn) {
>> +		err = -ENOMEM;
>> +		goto out_put_context;
>> +	}
>> +
>>   	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
>>   	if (!cd) {
>>   		err = -ENOMEM;
>> -		goto out_put_context;
>> +		goto out_free_mn;
>>   	}
>>   
>>   	refcount_set(&cd->refs, 1);
>>   
>> -	mutex_lock(&arm_smmu_asid_lock);
>> -	ret = arm_smmu_share_asid(mm, asid);
>> +	ret = arm_smmu_share_asid(tbl, mm, xa, asid, asid_bits);
>>   	if (ret) {
>> -		mutex_unlock(&arm_smmu_asid_lock);
>>   		goto out_free_cd;
>>   	}
>>   
>> -	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
>> -	mutex_unlock(&arm_smmu_asid_lock);
>> -
>> +	err = xa_insert(xa, asid, cd, GFP_KERNEL);
>>   	if (err)
>>   		goto out_free_asid;
>>   
>> @@ -406,21 +404,26 @@ arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
>>   	cd->asid = asid;
>>   	cd->mm = mm;
>>   
>> -	return cd;
>> +	pst_mn->vendor.cd = cd;
>> +	return pst_mn;
>>   
>>   out_free_asid:
>> -	iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd);
>> +	arm_smmu_free_asid(xa, cd);
>>   out_free_cd:
>>   	kfree(cd);
>> +out_free_mn:
>> +	kfree(pst_mn);
>>   out_put_context:
>>   	arm64_mm_context_put(mm);
>>   	return err < 0 ? ERR_PTR(err) : ret;
>>   }
>>   
>> -void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
>> -			     struct arm_smmu_ctx_desc *cd)
>> +static void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
>> +				    struct xarray *xa, void *cookie)
> 
> Could we pass a struct iommu_psdtable_mmu_notifier, since that's what
> alloc_shared() returns?

Sure, will update this.

> 
>>   {
>> -	if (iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd)) {
>> +	struct arm_smmu_ctx_desc *cd = cookie;
>> +
>> +	if (iommu_psdtable_free_asid(tbl, xa, cd)) {
>>   		/* Unpin ASID */
>>   		arm64_mm_context_put(cd->mm);
>>   		kfree(cd);
>> @@ -428,11 +431,13 @@ void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
>>   }
>>   
>>   struct iommu_vendor_psdtable_ops arm_cd_table_ops = {
>> -	.alloc	 = arm_smmu_alloc_cd_tables,
>> -	.free	 = arm_smmu_free_cd_tables,
>> -	.prepare = arm_smmu_prepare_cd,
>> -	.write	 = arm_smmu_write_ctx_desc,
>> -	.free_asid = arm_smmu_free_asid,
>> +	.alloc		= arm_smmu_alloc_cd_tables,
>> +	.free		= arm_smmu_free_cd_tables,
>> +	.prepare	= arm_smmu_prepare_cd,
>> +	.write		= arm_smmu_write_ctx_desc,
>> +	.free_asid	= arm_smmu_free_asid,
>> +	.alloc_shared	= arm_smmu_alloc_shared_cd,
>> +	.free_shared	= arm_smmu_free_shared_cd,
>>   };
>>   
>>   struct iommu_pasid_table *
>> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
>> index da35d4cc0c1e..ef28d0c409da 100644
>> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
>> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
>> @@ -13,23 +13,12 @@
>>   #include "../../io-pgtable-arm.h"
>>   #include "../../iommu-pasid-table.h"
>>   
>> -struct arm_smmu_mmu_notifier {
>> -	struct mmu_notifier		mn;
>> -	struct arm_smmu_ctx_desc	*cd;
>> -	bool				cleared;
>> -	refcount_t			refs;
>> -	struct list_head		list;
>> -	struct arm_smmu_domain		*domain;
>> -};
>> -
>> -#define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
>> -
>>   struct arm_smmu_bond {
>> -	struct iommu_sva		sva;
>> -	struct mm_struct		*mm;
>> -	struct arm_smmu_mmu_notifier	*smmu_mn;
>> -	struct list_head		list;
>> -	refcount_t			refs;
>> +	struct iommu_sva			sva;
>> +	struct mm_struct			*mm;
>> +	struct iommu_psdtable_mmu_notifier	*smmu_mn;
>> +	struct list_head			list;
>> +	refcount_t				refs;
>>   };
>>   
>>   #define sva_to_bond(handle) \
>> @@ -41,20 +30,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
>>   					 struct mm_struct *mm,
>>   					 unsigned long start, unsigned long end)
>>   {
>> -	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
>> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
>> +	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
>> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
>> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
>>   	size_t size = end - start + 1;
>>   
>>   	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
>> -		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
>> +		arm_smmu_tlb_inv_range_asid(start, size, cd->asid,
>>   					    PAGE_SIZE, false, smmu_domain);
>>   	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
>>   }
>>   
>>   static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>>   {
>> -	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
>> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
>> +	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
>> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
>> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
>>   	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>>   
>>   	mutex_lock(&sva_lock);
>> @@ -69,7 +60,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>>   	 */
>>   	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, &quiet_cd);
> 
> Another comment for the parent series: I'd prefer making this a
> "iommu_psdtable_quiesce()" call, instead of passing "quiet_cd" between
> driver and library. Because that won't work if the SMMU driver is a module
> or disabled - build of virtio-iommu will probably fail since quiet_cd will
> be undefined. We could make the library built-in and move quiet_cd there,
> but an explicit library call seems cleaner.

Right, having a separate library method would look cleaner. I will
update this and the flush_tlb() call below.
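
A possible shape for that library call (a sketch; the quiesce op and
its signature are assumptions):

	/* Library entry point; quiet_cd then stays private to the CD library */
	static inline int iommu_psdtable_quiesce(struct iommu_pasid_table *tbl,
						 ioasid_t pasid)
	{
		return tbl->ops->quiesce(tbl, pasid);
	}

	/* SMMU CD-library implementation of the op */
	static int arm_smmu_quiesce_cd(struct iommu_pasid_table *tbl,
				       ioasid_t pasid)
	{
		return arm_smmu_write_ctx_desc(&tbl->cfg, pasid, &quiet_cd);
	}

arm_smmu_mm_release() would then call iommu_psdtable_quiesce(tbl,
mm->pasid) instead of writing quiet_cd directly.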

> 
>>   
>> -	iommu_psdtable_flush_tlb(tbl, smmu_domain, smmu_mn->cd->asid);
>> +	iommu_psdtable_flush_tlb(tbl, smmu_domain, cd->asid);
> 
> We can directly call arm_smmu_tlb_inv* here. iommu_psdtable_flush_tlb()
> should only be called from the library. But with the previous comment,
> this invalidation would move to the library.
> 
>>   	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
>>   
>>   	smmu_mn->cleared = true;
>> @@ -78,7 +69,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>>   
>>   static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn)
>>   {
>> -	kfree(mn_to_smmu(mn));
>> +	kfree(mn_to_pstiommu(mn));
>>   }
>>   
>>   static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
>> @@ -88,63 +79,59 @@ static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
>>   };
>>   
>>   /* Allocate or get existing MMU notifier for this {domain, mm} pair */
>> -static struct arm_smmu_mmu_notifier *
>> +static struct iommu_psdtable_mmu_notifier *
>>   arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
>>   			  struct mm_struct *mm)
>>   {
>>   	int ret;
>> -	struct arm_smmu_ctx_desc *cd;
>> -	struct arm_smmu_mmu_notifier *smmu_mn;
>> +	struct iommu_psdtable_mmu_notifier *smmu_mn;
>> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
>>   	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>>   
>> -	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
>> +	list_for_each_entry(smmu_mn, &tbl->mmu_notifiers, list) {
>>   		if (smmu_mn->mn.mm == mm) {
>>   			refcount_inc(&smmu_mn->refs);
>>   			return smmu_mn;
>>   		}
>>   	}
>>   
>> -	cd = arm_smmu_alloc_shared_cd(tbl, mm);
>> -	if (IS_ERR(cd))
>> -		return ERR_CAST(cd);
>> -
>> -	smmu_mn = kzalloc(sizeof(*smmu_mn), GFP_KERNEL);
>> -	if (!smmu_mn) {
>> -		ret = -ENOMEM;
>> -		goto err_free_cd;
>> -	}
>> +	mutex_lock(&arm_smmu_asid_lock);
>> +	smmu_mn = iommu_psdtable_alloc_shared(tbl, mm, &arm_smmu_asid_xa,
>> +					      smmu->asid_bits);
>> +	mutex_unlock(&arm_smmu_asid_lock);
>> +	if (IS_ERR(smmu_mn))
>> +		return ERR_CAST(smmu_mn);
>>   
>>   	refcount_set(&smmu_mn->refs, 1);
>> -	smmu_mn->cd = cd;
>> -	smmu_mn->domain = smmu_domain;
>> +	smmu_mn->cookie = smmu_domain;
>>   	smmu_mn->mn.ops = &arm_smmu_mmu_notifier_ops;
>>   
>>   	ret = mmu_notifier_register(&smmu_mn->mn, mm);
>> -	if (ret) {
>> -		kfree(smmu_mn);
>> +	if (ret)
>>   		goto err_free_cd;
>> -	}
>>   
>> -	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, cd);
>> +	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
>> +				   smmu_mn->vendor.cd);
> 
> Pass smmu_mn here, and let the library code get the cd (to allow for other
> pasid table implementations)

Okay.

> 
>>   	if (ret)
>>   		goto err_put_notifier;
>>   
>> -	list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
>> +	list_add(&smmu_mn->list, &tbl->mmu_notifiers);
> 
> I'd keep the mmu_notifiers list in domain if the library doesn't use it
> for anything.
> 
> That made me wonder whether the whole of arm_smmu_mmu_notifer_get/put()
> could move to the library, since the virtio-iommu version seems to be the
> same. They probably belong in iommu-sva-lib but we can revisit that when
> there are more users.

Yes, I will move these notifier calls to the library. This makes it
easier for virtio-iommu too.
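
A first cut of the shared lookup might be (sketch; this assumes the
mmu_notifiers list stays on the table as in this patch, which is still
under discussion above):

	struct iommu_psdtable_mmu_notifier *
	iommu_psdtable_mn_get(struct iommu_pasid_table *tbl, struct mm_struct *mm)
	{
		struct iommu_psdtable_mmu_notifier *mn;

		list_for_each_entry(mn, &tbl->mmu_notifiers, list) {
			if (mn->mn.mm == mm) {
				refcount_inc(&mn->refs);
				return mn;
			}
		}
		return NULL;	/* caller then allocates via ->alloc_shared() */
	}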

Best regards
Vivek

> 
> Thanks,
> Jean
> 

[snip]

>> -- 
>> 2.17.1
>>

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 08/11] iommu/arm-smmu-v3: Implement shared context alloc and free ops
@ 2021-09-30  9:50       ` Vivek Kumar Gautam
  0 siblings, 0 replies; 97+ messages in thread
From: Vivek Kumar Gautam @ 2021-09-30  9:50 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: kevin.tian, mst, will.deacon, linux-kernel, virtualization,
	iommu, robin.murphy, linux-arm-kernel

Hi Jean,


On 9/21/21 9:37 PM, Jean-Philippe Brucker wrote:
> On Fri, Apr 23, 2021 at 03:21:44PM +0530, Vivek Gautam wrote:
>> Implementing the alloc_shared_cd and free_shared_cd in cd-lib, and
>> start using them for arm-smmu-v3-sva implementation.
>>
>> Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
>> ---
>>   .../arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c      | 71 ++++++++--------
>>   .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 83 ++++++++-----------
>>   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   |  1 -
>>   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   | 14 ----
>>   4 files changed, 73 insertions(+), 96 deletions(-)
>>
>> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
>> index 537b7c784d40..b87829796596 100644
>> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
>> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-cd-lib.c
>> @@ -285,16 +285,14 @@ static bool arm_smmu_free_asid(struct xarray *xa, void *cookie_cd)
>>    * descriptor is using it, try to replace it.
>>    */
>>   static struct arm_smmu_ctx_desc *
>> -arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
>> +arm_smmu_share_asid(struct iommu_pasid_table *tbl, struct mm_struct *mm,
>> +		    struct xarray *xa, u16 asid, u32 asid_bits)
> 
> xa and asid_bits could be stored in some arch-specific section of the
> iommu_pasid_table struct. Other table drivers wouldn't need those
> arguments.

Okay, will move them to a separate structure section.

> 
> More a comment for the parent series: it may be clearer to give a
> different prefix to functions in this file (arm_smmu_cd_, pst_arm_?).
> Reading this patch I'm a little confused by what belongs in the IOMMU
> driver and what is done by this library. (I also keep reading 'tbl' as
> 'tlb'. Maybe we could make it 'table' since that doesn't take a lot more
> space)

Yea, this may be confusing. I will fix these namings in my next version.

> 
>>   {
>>   	int ret;
>>   	u32 new_asid;
>>   	struct arm_smmu_ctx_desc *cd;
>> -	struct arm_smmu_device *smmu;
>> -	struct arm_smmu_domain *smmu_domain;
>> -	struct iommu_pasid_table *tbl;
>>   
>> -	cd = xa_load(&arm_smmu_asid_xa, asid);
>> +	cd = xa_load(xa, asid);
>>   	if (!cd)
>>   		return NULL;
>>   
>> @@ -306,12 +304,8 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
>>   		return cd;
>>   	}
>>   
>> -	smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
>> -	smmu = smmu_domain->smmu;
>> -	tbl = smmu_domain->tbl;
>> -
>> -	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
>> -		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
>> +	ret = xa_alloc(xa, &new_asid, cd, XA_LIMIT(1, (1 << asid_bits) - 1),
>> +		       GFP_KERNEL);
>>   	if (ret)
>>   		return ERR_PTR(-ENOSPC);
>>   	/*
>> @@ -325,48 +319,52 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
>>   	 * be some overlap between use of both ASIDs, until we invalidate the
>>   	 * TLB.
>>   	 */
>> -	ret = iommu_psdtable_write(tbl, &tbl->cfg, 0, cd);
>> +	ret = arm_smmu_write_ctx_desc(&tbl->cfg, 0, cd);
>>   	if (ret)
>>   		return ERR_PTR(-ENOSYS);
>>   
>>   	/* Invalidate TLB entries previously associated with that context */
>> -	iommu_psdtable_flush_tlb(tbl, smmu_domain, asid);
>> +	iommu_psdtable_flush_tlb(tbl, tbl->cookie, asid);
>>   
>> -	xa_erase(&arm_smmu_asid_xa, asid);
>> +	xa_erase(xa, asid);
>>   	return NULL;
>>   }
>>   
>> -struct arm_smmu_ctx_desc *
>> -arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
>> +static struct iommu_psdtable_mmu_notifier *
>> +arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm,
>> +			 struct xarray *xa, u32 asid_bits)
>>   {
>>   	u16 asid;
>>   	int err = 0;
>>   	u64 tcr, par, reg;
>>   	struct arm_smmu_ctx_desc *cd;
>>   	struct arm_smmu_ctx_desc *ret = NULL;
>> +	struct iommu_psdtable_mmu_notifier *pst_mn;
>>   
>>   	asid = arm64_mm_context_get(mm);
>>   	if (!asid)
>>   		return ERR_PTR(-ESRCH);
>>   
>> +	pst_mn = kzalloc(sizeof(*pst_mn), GFP_KERNEL);
>> +	if (!pst_mn) {
>> +		err = -ENOMEM;
>> +		goto out_put_context;
>> +	}
>> +
>>   	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
>>   	if (!cd) {
>>   		err = -ENOMEM;
>> -		goto out_put_context;
>> +		goto out_free_mn;
>>   	}
>>   
>>   	refcount_set(&cd->refs, 1);
>>   
>> -	mutex_lock(&arm_smmu_asid_lock);
>> -	ret = arm_smmu_share_asid(mm, asid);
>> +	ret = arm_smmu_share_asid(tbl, mm, xa, asid, asid_bits);
>>   	if (ret) {
>> -		mutex_unlock(&arm_smmu_asid_lock);
>>   		goto out_free_cd;
>>   	}
>>   
>> -	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
>> -	mutex_unlock(&arm_smmu_asid_lock);
>> -
>> +	err = xa_insert(xa, asid, cd, GFP_KERNEL);
>>   	if (err)
>>   		goto out_free_asid;
>>   
>> @@ -406,21 +404,26 @@ arm_smmu_alloc_shared_cd(struct iommu_pasid_table *tbl, struct mm_struct *mm)
>>   	cd->asid = asid;
>>   	cd->mm = mm;
>>   
>> -	return cd;
>> +	pst_mn->vendor.cd = cd;
>> +	return pst_mn;
>>   
>>   out_free_asid:
>> -	iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd);
>> +	arm_smmu_free_asid(xa, cd);
>>   out_free_cd:
>>   	kfree(cd);
>> +out_free_mn:
>> +	kfree(pst_mn);
>>   out_put_context:
>>   	arm64_mm_context_put(mm);
>>   	return err < 0 ? ERR_PTR(err) : ret;
>>   }
>>   
>> -void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
>> -			     struct arm_smmu_ctx_desc *cd)
>> +static void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
>> +				    struct xarray *xa, void *cookie)
> 
> Could we pass a struct iommu_psdtable_mmu_notifier, since that's what
> alloc_shared() returns?

Sure, will update this.

> 
>>   {
>> -	if (iommu_psdtable_free_asid(tbl, &arm_smmu_asid_xa, cd)) {
>> +	struct arm_smmu_ctx_desc *cd = cookie;
>> +
>> +	if (iommu_psdtable_free_asid(tbl, xa, cd)) {
>>   		/* Unpin ASID */
>>   		arm64_mm_context_put(cd->mm);
>>   		kfree(cd);
>> @@ -428,11 +431,13 @@ void arm_smmu_free_shared_cd(struct iommu_pasid_table *tbl,
>>   }
>>   
>>   struct iommu_vendor_psdtable_ops arm_cd_table_ops = {
>> -	.alloc	 = arm_smmu_alloc_cd_tables,
>> -	.free	 = arm_smmu_free_cd_tables,
>> -	.prepare = arm_smmu_prepare_cd,
>> -	.write	 = arm_smmu_write_ctx_desc,
>> -	.free_asid = arm_smmu_free_asid,
>> +	.alloc		= arm_smmu_alloc_cd_tables,
>> +	.free		= arm_smmu_free_cd_tables,
>> +	.prepare	= arm_smmu_prepare_cd,
>> +	.write		= arm_smmu_write_ctx_desc,
>> +	.free_asid	= arm_smmu_free_asid,
>> +	.alloc_shared	= arm_smmu_alloc_shared_cd,
>> +	.free_shared	= arm_smmu_free_shared_cd,
>>   };
>>   
>>   struct iommu_pasid_table *
>> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
>> index da35d4cc0c1e..ef28d0c409da 100644
>> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
>> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
>> @@ -13,23 +13,12 @@
>>   #include "../../io-pgtable-arm.h"
>>   #include "../../iommu-pasid-table.h"
>>   
>> -struct arm_smmu_mmu_notifier {
>> -	struct mmu_notifier		mn;
>> -	struct arm_smmu_ctx_desc	*cd;
>> -	bool				cleared;
>> -	refcount_t			refs;
>> -	struct list_head		list;
>> -	struct arm_smmu_domain		*domain;
>> -};
>> -
>> -#define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
>> -
>>   struct arm_smmu_bond {
>> -	struct iommu_sva		sva;
>> -	struct mm_struct		*mm;
>> -	struct arm_smmu_mmu_notifier	*smmu_mn;
>> -	struct list_head		list;
>> -	refcount_t			refs;
>> +	struct iommu_sva			sva;
>> +	struct mm_struct			*mm;
>> +	struct iommu_psdtable_mmu_notifier	*smmu_mn;
>> +	struct list_head			list;
>> +	refcount_t				refs;
>>   };
>>   
>>   #define sva_to_bond(handle) \
>> @@ -41,20 +30,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
>>   					 struct mm_struct *mm,
>>   					 unsigned long start, unsigned long end)
>>   {
>> -	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
>> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
>> +	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
>> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
>> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
>>   	size_t size = end - start + 1;
>>   
>>   	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
>> -		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
>> +		arm_smmu_tlb_inv_range_asid(start, size, cd->asid,
>>   					    PAGE_SIZE, false, smmu_domain);
>>   	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
>>   }
>>   
>>   static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>>   {
>> -	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
>> -	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
>> +	struct iommu_psdtable_mmu_notifier *smmu_mn = mn_to_pstiommu(mn);
>> +	struct arm_smmu_domain *smmu_domain = smmu_mn->cookie;
>> +	struct arm_smmu_ctx_desc *cd = smmu_mn->vendor.cd;
>>   	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>>   
>>   	mutex_lock(&sva_lock);
>> @@ -69,7 +60,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>>   	 */
>>   	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, &quiet_cd);
> 
> Another comment for the parent series: I'd prefer making this a
> "iommu_psdtable_quiesce()" call, instead of passing "quiet_cd" between
> driver and library. Because that won't work if the SMMU driver is a module
> or disabled - build of virtio-iommu will probably fail since quiet_cd will
> be undefined. We could make the library built-in and move quiet_cd there,
> but an explicit library call seems cleaner.

Right, having a separte library method would look cleaner. I will update 
this and the below flush_tlb() call.

> 
>>   
>> -	iommu_psdtable_flush_tlb(tbl, smmu_domain, smmu_mn->cd->asid);
>> +	iommu_psdtable_flush_tlb(tbl, smmu_domain, cd->asid);
> 
> We can directly call arm_smmu_tlb_inv* here. iommu_psdtable_flush_tlb()
> should only be called from the library. But with the previous comment,
> this invalidation would move to the library.
> 
>>   	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
>>   
>>   	smmu_mn->cleared = true;
>> @@ -78,7 +69,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>>   
>>   static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn)
>>   {
>> -	kfree(mn_to_smmu(mn));
>> +	kfree(mn_to_pstiommu(mn));
>>   }
>>   
>>   static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
>> @@ -88,63 +79,59 @@ static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
>>   };
>>   
>>   /* Allocate or get existing MMU notifier for this {domain, mm} pair */
>> -static struct arm_smmu_mmu_notifier *
>> +static struct iommu_psdtable_mmu_notifier *
>>   arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
>>   			  struct mm_struct *mm)
>>   {
>>   	int ret;
>> -	struct arm_smmu_ctx_desc *cd;
>> -	struct arm_smmu_mmu_notifier *smmu_mn;
>> +	struct iommu_psdtable_mmu_notifier *smmu_mn;
>> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
>>   	struct iommu_pasid_table *tbl = smmu_domain->tbl;
>>   
>> -	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
>> +	list_for_each_entry(smmu_mn, &tbl->mmu_notifiers, list) {
>>   		if (smmu_mn->mn.mm == mm) {
>>   			refcount_inc(&smmu_mn->refs);
>>   			return smmu_mn;
>>   		}
>>   	}
>>   
>> -	cd = arm_smmu_alloc_shared_cd(tbl, mm);
>> -	if (IS_ERR(cd))
>> -		return ERR_CAST(cd);
>> -
>> -	smmu_mn = kzalloc(sizeof(*smmu_mn), GFP_KERNEL);
>> -	if (!smmu_mn) {
>> -		ret = -ENOMEM;
>> -		goto err_free_cd;
>> -	}
>> +	mutex_lock(&arm_smmu_asid_lock);
>> +	smmu_mn = iommu_psdtable_alloc_shared(tbl, mm, &arm_smmu_asid_xa,
>> +					      smmu->asid_bits);
>> +	mutex_unlock(&arm_smmu_asid_lock);
>> +	if (IS_ERR(smmu_mn))
>> +		return ERR_CAST(smmu_mn);
>>   
>>   	refcount_set(&smmu_mn->refs, 1);
>> -	smmu_mn->cd = cd;
>> -	smmu_mn->domain = smmu_domain;
>> +	smmu_mn->cookie = smmu_domain;
>>   	smmu_mn->mn.ops = &arm_smmu_mmu_notifier_ops;
>>   
>>   	ret = mmu_notifier_register(&smmu_mn->mn, mm);
>> -	if (ret) {
>> -		kfree(smmu_mn);
>> +	if (ret)
>>   		goto err_free_cd;
>> -	}
>>   
>> -	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, cd);
>> +	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
>> +				   smmu_mn->vendor.cd);
> 
> Pass smmu_mn here, and let the library code get the cd (to allow for other
> pasid table implementations)

Okay.

> 
>>   	if (ret)
>>   		goto err_put_notifier;
>>   
>> -	list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
>> +	list_add(&smmu_mn->list, &tbl->mmu_notifiers);
> 
> I'd keep the mmu_notifiers list in domain if the library doesn't use it
> for anything.
> 
> That made me wonder whether the whole of arm_smmu_mmu_notifer_get/put()
> could move to the library, since the virtio-iommu version seems to be the
> same. They probably belong in iommu-sva-lib but we can revisit that when
> there are more users.

Yea, I will move these notifier calls to the library. This makes it 
easier for virtio-iommu too.

Best regards
Vivek

> 
> Thanks,
> Jean
> 

[snip]

>> -- 
>> 2.17.1
>>
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 01/11] uapi/virtio-iommu: Add page request grp-id and flags information
@ 2021-09-30 10:57           ` Vivek Kumar Gautam
  -1 siblings, 0 replies; 97+ messages in thread
From: Vivek Kumar Gautam @ 2021-09-30 10:57 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi, Alex Williamson



On 9/30/21 2:19 PM, Jean-Philippe Brucker wrote:
> On Thu, Sep 30, 2021 at 10:26:35AM +0530, Vivek Kumar Gautam wrote:
>>> I'm not sure why we made it 32-bit in Linux UAPI, it's a little wasteful.
>>> PCIe PRGI is 9 bits and SMMU STAG is 16 bits. Since the scope of the grpid
>>> is the endpoint, 16-bit means 64k in-flight faults per endpoint, which
>>> seems more than enough.
>>
>> Right, I will update this to a 16-bit field. Would it be okay to update the
>> iommu uAPI at this point?
> 
> Since there is no UAPI transport for the fault request/response at the
> moment, it should be possible to update it.

Righty! I will add a patch for that too then.
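
For reference, the uAPI change would be along these lines (a sketch,
assuming the current layout of struct iommu_fault_page_request; the
reserved field keeps the size and alignment unchanged):

	struct iommu_fault_page_request {
		__u32	flags;
		__u32	pasid;
	-	__u32	grpid;
	+	__u16	grpid;
	+	__u16	reserved;
		__u32	perm;
		__u64	addr;
		__u64	private_data[2];
	};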



Thanks & regards

Vivek

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 10/11] uapi/virtio-iommu: Add a new request type to send page response
@ 2021-10-06  9:55         ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-10-06  9:55 UTC (permalink / raw)
  To: Vivek Kumar Gautam
  Cc: linux-kernel, linux-arm-kernel, iommu, virtualization, joro,
	will.deacon, mst, robin.murphy, eric.auger, kevin.tian,
	jacob.jun.pan, yi.l.liu, Lorenzo.Pieralisi,
	shameerali.kolothum.thodi

On Thu, Sep 30, 2021 at 02:54:05PM +0530, Vivek Kumar Gautam wrote:
> > > +struct virtio_iommu_req_page_resp {
> > > +	struct virtio_iommu_req_head		head;
> > > +	__le32					domain;
> > 
> > I don't think we need this field, since the fault report doesn't come with
> > a domain.
> 
> But here we are sending the response, which would ultimately be consumed
> by vfio. In kvmtool, I am consuming this "virtio_iommu_req_page_resp"
> request in the virtio/iommu driver, extracting the domain from it, and using
> that to call the respective "page_response" ops from "vfio_iommu_ops" in the
> vfio/core driver.
> 
> Is this an incorrect way of passing the page-response back to the host
> kernel?

That works for the host userspace-kernel interface because the device is
always attached to a VFIO container.

For virtio-iommu the domain info is redundant. The endpoint information
needs to be kept through the whole response path in order to target the
right endpoint in the end. In addition the guest could enable PRI without
attaching the endpoint to a domain, or fail to disable PRI before
detaching the endpoint. Sure it's weird, but the host can still inject the
recoverable page fault in this case, and the guest answers with "invalid"
status but no domain. We could mandate domains for recoverable faults but
that forces a synchronization against attach/detach and I think it
needlessly deviates from other IOMMUs.
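
To illustrate, the response could carry the endpoint instead (a sketch,
with field names assumed from this series):

	struct virtio_iommu_req_page_resp {
		struct virtio_iommu_req_head	head;
		__le32				flags;
		__le32				endpoint;
		__le32				pasid;
		__le32				grpid;
		__le16				resp_code;
		__le16				reserved;
		struct virtio_iommu_req_tail	tail;
	};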

Thanks,
Jean

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults
@ 2021-10-11  8:11       ` Vivek Gautam
  -1 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-10-11  8:11 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: Tian, Kevin, mst, Will Deacon, open list, virtualization,
	list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>,,
	Robin Murphy, Linux ARM

Hi Jean,


On Tue, Sep 21, 2021 at 9:33 PM Jean-Philippe Brucker
<jean-philippe@linaro.org> wrote:
>
> On Fri, Apr 23, 2021 at 03:21:39PM +0530, Vivek Gautam wrote:
> > Redirect the incoming page faults to the registered fault handler
> > that can take the fault information such as, pasid, page request
> > group-id, address and pasid flags.
> >
> > Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> > ---
> >  drivers/iommu/virtio-iommu.c      | 80 ++++++++++++++++++++++++++++++-
> >  include/uapi/linux/virtio_iommu.h |  1 +
> >  2 files changed, 80 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> > index c970f386f031..fd237cad1ce5 100644
> > --- a/drivers/iommu/virtio-iommu.c
> > +++ b/drivers/iommu/virtio-iommu.c
> > @@ -37,6 +37,13 @@
> >  /* Some architectures need an Address Space ID for each page table */
> >  DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
> >
> > +struct viommu_dev_pri_work {
> > +     struct work_struct              work;
> > +     struct viommu_dev               *dev;
> > +     struct virtio_iommu_fault       *vfault;
> > +     u32                             endpoint;
> > +};
> > +
> >  struct viommu_dev {
> >       struct iommu_device             iommu;
> >       struct device                   *dev;
> > @@ -49,6 +56,8 @@ struct viommu_dev {
> >       struct list_head                requests;
> >       void                            *evts;
> >       struct list_head                endpoints;
> > +     struct workqueue_struct         *pri_wq;
> > +     struct viommu_dev_pri_work      *pri_work;
>
> IOPF already has a workqueue, so the driver doesn't need one.
> iommu_report_device_fault() should be fast enough to be called from the
> event handler.

Sure, will call iommu_report_device_fault() directly from
viommu_fault_handler().
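
i.e. the PAGE_REQ branch would reduce to something like this (a sketch;
viommu_handle_ppr() reworked into a plain function that fills the
iommu_fault_event and reports it synchronously):

	if (type == VIRTIO_IOMMU_FAULT_F_PAGE_REQ)
		return viommu_handle_ppr(viommu, fault, endpoint);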

>
> >
> >       /* Device configuration */
> >       struct iommu_domain_geometry    geometry;
> > @@ -666,6 +675,58 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> >       return ret;
> >  }
> >
> > +static void viommu_handle_ppr(struct work_struct *work)
> > +{
> > +     struct viommu_dev_pri_work *pwork =
> > +                             container_of(work, struct viommu_dev_pri_work, work);
> > +     struct viommu_dev *viommu = pwork->dev;
> > +     struct virtio_iommu_fault *vfault = pwork->vfault;
> > +     struct viommu_endpoint *vdev;
> > +     struct viommu_ep_entry *ep;
> > +     struct iommu_fault_event fault_evt = {
> > +             .fault.type = IOMMU_FAULT_PAGE_REQ,
> > +     };
> > +     struct iommu_fault_page_request *prq = &fault_evt.fault.prm;
> > +
> > +     u32 flags       = le32_to_cpu(vfault->flags);
> > +     u32 prq_flags   = le32_to_cpu(vfault->pr_evt_flags);
> > +     u32 endpoint    = pwork->endpoint;
> > +
> > +     memset(prq, 0, sizeof(struct iommu_fault_page_request));
>
> The fault_evt struct is already initialized

Right, I will remove this line.

>
> > +     prq->addr = le64_to_cpu(vfault->address);
> > +
> > +     if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_LAST_PAGE)
> > +             prq->flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
> > +     if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) {
> > +             prq->flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
> > +             prq->pasid = le32_to_cpu(vfault->pasid);
> > +             prq->grpid = le32_to_cpu(vfault->grpid);
> > +     }
> > +
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_READ)
> > +             prq->perm |= IOMMU_FAULT_PERM_READ;
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_WRITE)
> > +             prq->perm |= IOMMU_FAULT_PERM_WRITE;
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_EXEC)
> > +             prq->perm |= IOMMU_FAULT_PERM_EXEC;
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_PRIV)
> > +             prq->perm |= IOMMU_FAULT_PERM_PRIV;
> > +
> > +     list_for_each_entry(ep, &viommu->endpoints, list) {
> > +             if (ep->eid == endpoint) {
> > +                     vdev = ep->vdev;

I have a question here though -
Is the endpoint ID unique across all the endpoints per 'viommu_dev', or
per 'viommu_domain'? If it is per 'viommu_domain' then the above list is
also incorrect.
As you pointed to in the patch [1] -
[PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served
by viommu_dev
I am planning to add the endpoint IDs into a static global xarray in
viommu_probe_device(), as below:

        vdev_for_each_id(i, eid, vdev) {
                ret = xa_insert(&viommu_ep_ids, eid, vdev, GFP_KERNEL);
                if (ret)
                        goto err_free_dev;
        }

and replace the above list traversal as below:

                xa_lock_irqsave(&viommu_ep_ids, flags);
                xa_for_each(&viommu_ep_ids, eid, vdev) {
                        if (eid == endpoint) {
                                ret = iommu_report_device_fault(vdev->dev,
                                                                &fault_evt);
                                if (ret)
                                        dev_err(vdev->dev,
                                                "Couldn't handle page request\n");
                        }
                }
                xa_unlock_irqrestore(&viommu_ep_ids, flags);

But using a global xarray would also be incorrect if the endpoint IDs are
only unique within a 'viommu_dev' or 'viommu_domain' rather than globally.

I need to find the correct 'viommu_endpoint' to call iommu_report_device_fault()
with the correct device.

> > +                     break;
> > +             }
> > +     }
> > +
> > +     if ((prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) &&
> > +         (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_NEEDS_PASID))
> > +             prq->flags |= IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
> > +
> > +     if (iommu_report_device_fault(vdev->dev, &fault_evt))
> > +             dev_err(vdev->dev, "Couldn't handle page request\n");
>
> An error likely means that nobody registered a fault handler, but we could
> display a few more details about the fault that would help debug the
> endpoint

Sure, will add more debug info to this log.
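
e.g. something like (a sketch):

	if (iommu_report_device_fault(vdev->dev, &fault_evt))
		dev_err_ratelimited(vdev->dev,
				    "Couldn't handle page request (pasid 0x%x grpid %u addr 0x%llx)\n",
				    prq->pasid, prq->grpid,
				    (unsigned long long)prq->addr);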

>
> > +}
> > +
> >  static int viommu_fault_handler(struct viommu_dev *viommu,
> >                               struct virtio_iommu_fault *fault)
> >  {
> > @@ -679,7 +740,13 @@ static int viommu_fault_handler(struct viommu_dev *viommu,
> >       u32 pasid       = le32_to_cpu(fault->pasid);
> >
> >       if (type == VIRTIO_IOMMU_FAULT_F_PAGE_REQ) {
> > -             dev_info(viommu->dev, "Page request fault - unhandled\n");
> > +             dev_info_ratelimited(viommu->dev,
> > +                                  "Page request fault from EP %u\n",
> > +                                  endpoint);
>
> That's rather for debugging the virtio-iommu driver, so should be
> dev_dbg() (or removed entirely)

I will remove this log.

>
> > +
> > +             viommu->pri_work->vfault = fault;
> > +             viommu->pri_work->endpoint = endpoint;
> > +             queue_work(viommu->pri_wq, &viommu->pri_work->work);
> >               return 0;
> >       }
> >
> > @@ -1683,6 +1750,17 @@ static int viommu_probe(struct virtio_device *vdev)
> >               goto err_free_vqs;
> >       }
> >
> > +     viommu->pri_work = kzalloc(sizeof(*viommu->pri_work), GFP_KERNEL);
> > +     if (!viommu->pri_work)
> > +             return -ENOMEM;
> > +
> > +     viommu->pri_work->dev = viommu;
> > +
> > +     INIT_WORK(&viommu->pri_work->work, viommu_handle_ppr);
> > +     viommu->pri_wq = create_singlethread_workqueue("viommu-pri-wq");
> > +     if (!viommu->pri_wq)
> > +             return -ENOMEM;
> > +
> >       viommu->map_flags = VIRTIO_IOMMU_MAP_F_READ | VIRTIO_IOMMU_MAP_F_WRITE;
> >       viommu->last_domain = ~0U;
> >
> > diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> > index accc3318ce46..53aa88e6b077 100644
> > --- a/include/uapi/linux/virtio_iommu.h
> > +++ b/include/uapi/linux/virtio_iommu.h
> > @@ -302,6 +302,7 @@ struct virtio_iommu_req_invalidate {
> >  #define VIRTIO_IOMMU_FAULT_F_READ            (1 << 0)
> >  #define VIRTIO_IOMMU_FAULT_F_WRITE           (1 << 1)
> >  #define VIRTIO_IOMMU_FAULT_F_EXEC            (1 << 2)
> > +#define VIRTIO_IOMMU_FAULT_F_PRIV            (1 << 3)
>
> Should go in the previous patch. (I'd also prefer 'privileged' because in
> this context 'priv' is easily read as 'private')

Sure, will move this to the previous patch.
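
i.e.:

	-#define VIRTIO_IOMMU_FAULT_F_PRIV		(1 << 3)
	+#define VIRTIO_IOMMU_FAULT_F_PRIVILEGED		(1 << 3)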

Thanks & regards
Vivek

[1] https://lore.kernel.org/all/YUoBW13+CvIljUgc@myrica/#t

[snip]

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults
@ 2021-10-11  8:11       ` Vivek Gautam
  0 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-10-11  8:11 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: Tian, Kevin, mst, Will Deacon, open list, virtualization,
	list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>, ,
	Robin Murphy, Linux ARM

Hi Jean,


On Tue, Sep 21, 2021 at 9:33 PM Jean-Philippe Brucker
<jean-philippe@linaro.org> wrote:
>
> On Fri, Apr 23, 2021 at 03:21:39PM +0530, Vivek Gautam wrote:
> > Redirect the incoming page faults to the registered fault handler
> > that can take the fault information such as, pasid, page request
> > group-id, address and pasid flags.
> >
> > Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> > ---
> >  drivers/iommu/virtio-iommu.c      | 80 ++++++++++++++++++++++++++++++-
> >  include/uapi/linux/virtio_iommu.h |  1 +
> >  2 files changed, 80 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> > index c970f386f031..fd237cad1ce5 100644
> > --- a/drivers/iommu/virtio-iommu.c
> > +++ b/drivers/iommu/virtio-iommu.c
> > @@ -37,6 +37,13 @@
> >  /* Some architectures need an Address Space ID for each page table */
> >  DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
> >
> > +struct viommu_dev_pri_work {
> > +     struct work_struct              work;
> > +     struct viommu_dev               *dev;
> > +     struct virtio_iommu_fault       *vfault;
> > +     u32                             endpoint;
> > +};
> > +
> >  struct viommu_dev {
> >       struct iommu_device             iommu;
> >       struct device                   *dev;
> > @@ -49,6 +56,8 @@ struct viommu_dev {
> >       struct list_head                requests;
> >       void                            *evts;
> >       struct list_head                endpoints;
> > +     struct workqueue_struct         *pri_wq;
> > +     struct viommu_dev_pri_work      *pri_work;
>
> IOPF already has a workqueue, so the driver doesn't need one.
> iommu_report_device_fault() should be fast enough to be called from the
> event handler.

Sure, will call iommu_report_device_fault() directly from
viommu_fault_handler().

>
> >
> >       /* Device configuration */
> >       struct iommu_domain_geometry    geometry;
> > @@ -666,6 +675,58 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> >       return ret;
> >  }
> >
> > +static void viommu_handle_ppr(struct work_struct *work)
> > +{
> > +     struct viommu_dev_pri_work *pwork =
> > +                             container_of(work, struct viommu_dev_pri_work, work);
> > +     struct viommu_dev *viommu = pwork->dev;
> > +     struct virtio_iommu_fault *vfault = pwork->vfault;
> > +     struct viommu_endpoint *vdev;
> > +     struct viommu_ep_entry *ep;
> > +     struct iommu_fault_event fault_evt = {
> > +             .fault.type = IOMMU_FAULT_PAGE_REQ,
> > +     };
> > +     struct iommu_fault_page_request *prq = &fault_evt.fault.prm;
> > +
> > +     u32 flags       = le32_to_cpu(vfault->flags);
> > +     u32 prq_flags   = le32_to_cpu(vfault->pr_evt_flags);
> > +     u32 endpoint    = pwork->endpoint;
> > +
> > +     memset(prq, 0, sizeof(struct iommu_fault_page_request));
>
> The fault_evt struct is already initialized

Right, I will remove this line.

>
> > +     prq->addr = le64_to_cpu(vfault->address);
> > +
> > +     if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_LAST_PAGE)
> > +             prq->flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
> > +     if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) {
> > +             prq->flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
> > +             prq->pasid = le32_to_cpu(vfault->pasid);
> > +             prq->grpid = le32_to_cpu(vfault->grpid);
> > +     }
> > +
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_READ)
> > +             prq->perm |= IOMMU_FAULT_PERM_READ;
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_WRITE)
> > +             prq->perm |= IOMMU_FAULT_PERM_WRITE;
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_EXEC)
> > +             prq->perm |= IOMMU_FAULT_PERM_EXEC;
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_PRIV)
> > +             prq->perm |= IOMMU_FAULT_PERM_PRIV;
> > +
> > +     list_for_each_entry(ep, &viommu->endpoints, list) {
> > +             if (ep->eid == endpoint) {
> > +                     vdev = ep->vdev;

I have a question here though -
Is endpoint-ID unique across all the endpoints available per 'viommu_dev' or
per 'viommu_domain'?
If it is per 'viommu_domain' then the above list is also incorrect.
As you pointed to in the patch [1] -
[PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served
by viommu_dev
I am planning to add endpoint ID into a static global xarray in
viommu_probe_device() as below:

        vdev_for_each_id(i, eid, vdev) {
                ret = xa_insert(&viommu_ep_ids, eid, vdev, GFP_KERNEL);
                if (ret)
                        goto err_free_dev;
        }

and replace the above list traversal as below:

                xa_lock_irqsave(&viommu_ep_ids, flags);
                xa_for_each(&viommu_ep_ids, eid, vdev) {
                        if (eid == endpoint) {
                                ret =
iommu_report_device_fault(vdev->dev, &fault_evt);
                                if (ret)
                                        dev_err(vdev->dev, "Couldn't
handle page request\n");
                        }
                }
                xa_unlock_irqrestore(&viommu_ep_ids, flags);

But using a global xarray would also be incorrect if the endpointsID are global
across 'viommu_domain'.

I need to find the correct 'viommu_endpoint' to call iommu_report_device_fault()
with the correct device.

> > +                     break;
> > +             }
> > +     }
> > +
> > +     if ((prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) &&
> > +         (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_NEEDS_PASID))
> > +             prq->flags |= IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
> > +
> > +     if (iommu_report_device_fault(vdev->dev, &fault_evt))
> > +             dev_err(vdev->dev, "Couldn't handle page request\n");
>
> An error likely means that nobody registered a fault handler, but we could
> display a few more details about the fault that would help debug the
> endpoint

Sure, will add more debug info to this log.

>
> > +}
> > +
> >  static int viommu_fault_handler(struct viommu_dev *viommu,
> >                               struct virtio_iommu_fault *fault)
> >  {
> > @@ -679,7 +740,13 @@ static int viommu_fault_handler(struct viommu_dev *viommu,
> >       u32 pasid       = le32_to_cpu(fault->pasid);
> >
> >       if (type == VIRTIO_IOMMU_FAULT_F_PAGE_REQ) {
> > -             dev_info(viommu->dev, "Page request fault - unhandled\n");
> > +             dev_info_ratelimited(viommu->dev,
> > +                                  "Page request fault from EP %u\n",
> > +                                  endpoint);
>
> That's rather for debugging the virtio-iommu driver, so should be
> dev_dbg() (or removed entirely)

I will remove this log.

>
> > +
> > +             viommu->pri_work->vfault = fault;
> > +             viommu->pri_work->endpoint = endpoint;
> > +             queue_work(viommu->pri_wq, &viommu->pri_work->work);
> >               return 0;
> >       }
> >
> > @@ -1683,6 +1750,17 @@ static int viommu_probe(struct virtio_device *vdev)
> >               goto err_free_vqs;
> >       }
> >
> > +     viommu->pri_work = kzalloc(sizeof(*viommu->pri_work), GFP_KERNEL);
> > +     if (!viommu->pri_work)
> > +             return -ENOMEM;
> > +
> > +     viommu->pri_work->dev = viommu;
> > +
> > +     INIT_WORK(&viommu->pri_work->work, viommu_handle_ppr);
> > +     viommu->pri_wq = create_singlethread_workqueue("viommu-pri-wq");
> > +     if (!viommu->pri_wq)
> > +             return -ENOMEM;
> > +
> >       viommu->map_flags = VIRTIO_IOMMU_MAP_F_READ | VIRTIO_IOMMU_MAP_F_WRITE;
> >       viommu->last_domain = ~0U;
> >
> > diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> > index accc3318ce46..53aa88e6b077 100644
> > --- a/include/uapi/linux/virtio_iommu.h
> > +++ b/include/uapi/linux/virtio_iommu.h
> > @@ -302,6 +302,7 @@ struct virtio_iommu_req_invalidate {
> >  #define VIRTIO_IOMMU_FAULT_F_READ            (1 << 0)
> >  #define VIRTIO_IOMMU_FAULT_F_WRITE           (1 << 1)
> >  #define VIRTIO_IOMMU_FAULT_F_EXEC            (1 << 2)
> > +#define VIRTIO_IOMMU_FAULT_F_PRIV            (1 << 3)
>
> Should go in the previous patch. (I'd also prefer 'privileged' because in
> this context 'priv' is easily read as 'private')

Sure, will move this to the previous patch.

Thanks & regards
Vivek

[1] https://lore.kernel.org/all/YUoBW13+CvIljUgc@myrica/#t

[snip]
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults
@ 2021-10-11  8:11       ` Vivek Gautam
  0 siblings, 0 replies; 97+ messages in thread
From: Vivek Gautam @ 2021-10-11  8:11 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: Tian, Kevin, mst, Will Deacon, open list, virtualization,
	list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>, ,
	Robin Murphy, Linux ARM

Hi Jean,


On Tue, Sep 21, 2021 at 9:33 PM Jean-Philippe Brucker
<jean-philippe@linaro.org> wrote:
>
> On Fri, Apr 23, 2021 at 03:21:39PM +0530, Vivek Gautam wrote:
> > Redirect the incoming page faults to the registered fault handler
> > that can take the fault information such as, pasid, page request
> > group-id, address and pasid flags.
> >
> > Signed-off-by: Vivek Gautam <vivek.gautam@arm.com>
> > ---
> >  drivers/iommu/virtio-iommu.c      | 80 ++++++++++++++++++++++++++++++-
> >  include/uapi/linux/virtio_iommu.h |  1 +
> >  2 files changed, 80 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> > index c970f386f031..fd237cad1ce5 100644
> > --- a/drivers/iommu/virtio-iommu.c
> > +++ b/drivers/iommu/virtio-iommu.c
> > @@ -37,6 +37,13 @@
> >  /* Some architectures need an Address Space ID for each page table */
> >  DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
> >
> > +struct viommu_dev_pri_work {
> > +     struct work_struct              work;
> > +     struct viommu_dev               *dev;
> > +     struct virtio_iommu_fault       *vfault;
> > +     u32                             endpoint;
> > +};
> > +
> >  struct viommu_dev {
> >       struct iommu_device             iommu;
> >       struct device                   *dev;
> > @@ -49,6 +56,8 @@ struct viommu_dev {
> >       struct list_head                requests;
> >       void                            *evts;
> >       struct list_head                endpoints;
> > +     struct workqueue_struct         *pri_wq;
> > +     struct viommu_dev_pri_work      *pri_work;
>
> IOPF already has a workqueue, so the driver doesn't need one.
> iommu_report_device_fault() should be fast enough to be called from the
> event handler.

Sure, will call iommu_report_device_fault() directly from
viommu_fault_handler().

>
> >
> >       /* Device configuration */
> >       struct iommu_domain_geometry    geometry;
> > @@ -666,6 +675,58 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> >       return ret;
> >  }
> >
> > +static void viommu_handle_ppr(struct work_struct *work)
> > +{
> > +     struct viommu_dev_pri_work *pwork =
> > +                             container_of(work, struct viommu_dev_pri_work, work);
> > +     struct viommu_dev *viommu = pwork->dev;
> > +     struct virtio_iommu_fault *vfault = pwork->vfault;
> > +     struct viommu_endpoint *vdev;
> > +     struct viommu_ep_entry *ep;
> > +     struct iommu_fault_event fault_evt = {
> > +             .fault.type = IOMMU_FAULT_PAGE_REQ,
> > +     };
> > +     struct iommu_fault_page_request *prq = &fault_evt.fault.prm;
> > +
> > +     u32 flags       = le32_to_cpu(vfault->flags);
> > +     u32 prq_flags   = le32_to_cpu(vfault->pr_evt_flags);
> > +     u32 endpoint    = pwork->endpoint;
> > +
> > +     memset(prq, 0, sizeof(struct iommu_fault_page_request));
>
> The fault_evt struct is already initialized

Right, I will remove this line.

>
> > +     prq->addr = le64_to_cpu(vfault->address);
> > +
> > +     if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_LAST_PAGE)
> > +             prq->flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
> > +     if (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) {
> > +             prq->flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
> > +             prq->pasid = le32_to_cpu(vfault->pasid);
> > +             prq->grpid = le32_to_cpu(vfault->grpid);
> > +     }
> > +
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_READ)
> > +             prq->perm |= IOMMU_FAULT_PERM_READ;
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_WRITE)
> > +             prq->perm |= IOMMU_FAULT_PERM_WRITE;
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_EXEC)
> > +             prq->perm |= IOMMU_FAULT_PERM_EXEC;
> > +     if (flags & VIRTIO_IOMMU_FAULT_F_PRIV)
> > +             prq->perm |= IOMMU_FAULT_PERM_PRIV;
> > +
> > +     list_for_each_entry(ep, &viommu->endpoints, list) {
> > +             if (ep->eid == endpoint) {
> > +                     vdev = ep->vdev;

I have a question here though -
Is endpoint-ID unique across all the endpoints available per 'viommu_dev' or
per 'viommu_domain'?
If it is per 'viommu_domain' then the above list is also incorrect.
As you pointed to in the patch [1] -
[PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served
by viommu_dev
I am planning to add endpoint ID into a static global xarray in
viommu_probe_device() as below:

        vdev_for_each_id(i, eid, vdev) {
                ret = xa_insert(&viommu_ep_ids, eid, vdev, GFP_KERNEL);
                if (ret)
                        goto err_free_dev;
        }

and replace the above list traversal as below:

                xa_lock_irqsave(&viommu_ep_ids, flags);
                xa_for_each(&viommu_ep_ids, eid, vdev) {
                        if (eid == endpoint) {
                                ret =
iommu_report_device_fault(vdev->dev, &fault_evt);
                                if (ret)
                                        dev_err(vdev->dev, "Couldn't
handle page request\n");
                        }
                }
                xa_unlock_irqrestore(&viommu_ep_ids, flags);

But using a global xarray would also be incorrect if the endpointsID are global
across 'viommu_domain'.

I need to find the correct 'viommu_endpoint' to call iommu_report_device_fault()
with the correct device.

> > +                     break;
> > +             }
> > +     }
> > +
> > +     if ((prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_PASID_VALID) &&
> > +         (prq_flags & VIRTIO_IOMMU_FAULT_PRQ_F_NEEDS_PASID))
> > +             prq->flags |= IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
> > +
> > +     if (iommu_report_device_fault(vdev->dev, &fault_evt))
> > +             dev_err(vdev->dev, "Couldn't handle page request\n");
>
> An error likely means that nobody registered a fault handler, but we could
> display a few more details about the fault that would help debug the
> endpoint

Sure, will add more debug info to this log.

>
> > +}
> > +
> >  static int viommu_fault_handler(struct viommu_dev *viommu,
> >                               struct virtio_iommu_fault *fault)
> >  {
> > @@ -679,7 +740,13 @@ static int viommu_fault_handler(struct viommu_dev *viommu,
> >       u32 pasid       = le32_to_cpu(fault->pasid);
> >
> >       if (type == VIRTIO_IOMMU_FAULT_F_PAGE_REQ) {
> > -             dev_info(viommu->dev, "Page request fault - unhandled\n");
> > +             dev_info_ratelimited(viommu->dev,
> > +                                  "Page request fault from EP %u\n",
> > +                                  endpoint);
>
> That's rather for debugging the virtio-iommu driver, so should be
> dev_dbg() (or removed entirely)

I will remove this log.

>
> > +
> > +             viommu->pri_work->vfault = fault;
> > +             viommu->pri_work->endpoint = endpoint;
> > +             queue_work(viommu->pri_wq, &viommu->pri_work->work);
> >               return 0;
> >       }
> >
> > @@ -1683,6 +1750,17 @@ static int viommu_probe(struct virtio_device *vdev)
> >               goto err_free_vqs;
> >       }
> >
> > +     viommu->pri_work = kzalloc(sizeof(*viommu->pri_work), GFP_KERNEL);
> > +     if (!viommu->pri_work)
> > +             return -ENOMEM;
> > +
> > +     viommu->pri_work->dev = viommu;
> > +
> > +     INIT_WORK(&viommu->pri_work->work, viommu_handle_ppr);
> > +     viommu->pri_wq = create_singlethread_workqueue("viommu-pri-wq");
> > +     if (!viommu->pri_wq)
> > +             return -ENOMEM;
> > +
> >       viommu->map_flags = VIRTIO_IOMMU_MAP_F_READ | VIRTIO_IOMMU_MAP_F_WRITE;
> >       viommu->last_domain = ~0U;
> >
> > diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> > index accc3318ce46..53aa88e6b077 100644
> > --- a/include/uapi/linux/virtio_iommu.h
> > +++ b/include/uapi/linux/virtio_iommu.h
> > @@ -302,6 +302,7 @@ struct virtio_iommu_req_invalidate {
> >  #define VIRTIO_IOMMU_FAULT_F_READ            (1 << 0)
> >  #define VIRTIO_IOMMU_FAULT_F_WRITE           (1 << 1)
> >  #define VIRTIO_IOMMU_FAULT_F_EXEC            (1 << 2)
> > +#define VIRTIO_IOMMU_FAULT_F_PRIV            (1 << 3)
>
> Should go in the previous patch. (I'd also prefer 'privileged' because in
> this context 'priv' is easily read as 'private')

Sure, will move this to the previous patch.

Thanks & regards
Vivek

[1] https://lore.kernel.org/all/YUoBW13+CvIljUgc@myrica/#t

[snip]

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults
@ 2021-10-11  9:16         ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-10-11  9:16 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: Tian, Kevin, mst, Will Deacon, open list, virtualization,
	IOMMU DRIVERS <iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>, Robin Murphy, Linux ARM

Hi Vivek,

On Mon, Oct 11, 2021 at 01:41:15PM +0530, Vivek Gautam wrote:
> > > +     list_for_each_entry(ep, &viommu->endpoints, list) {
> > > +             if (ep->eid == endpoint) {
> > > +                     vdev = ep->vdev;
> 
> I have a question here though -
> Is the endpoint ID unique across all the endpoints per 'viommu_dev', or
> per 'viommu_domain'? If it is per 'viommu_domain' then the above list is
> also incorrect.
> As you pointed to in the patch [1] -
> [PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served
> by viommu_dev
> I am planning to add the endpoint IDs into a static global xarray in
> viommu_probe_device(), as below:
> 
>         vdev_for_each_id(i, eid, vdev) {
>                 ret = xa_insert(&viommu_ep_ids, eid, vdev, GFP_KERNEL);
>                 if (ret)
>                         goto err_free_dev;
>         }
> 
> and replace the above list traversal as below:
> 
>                 xa_lock_irqsave(&viommu_ep_ids, flags);
>                 xa_for_each(&viommu_ep_ids, eid, vdev) {
>                         if (eid == endpoint) {
>                                 ret = iommu_report_device_fault(vdev->dev,
>                                                                 &fault_evt);
>                                 if (ret)
>                                         dev_err(vdev->dev,
>                                                 "Couldn't handle page request\n");
>                         }
>                 }
>                 xa_unlock_irqrestore(&viommu_ep_ids, flags);
> 
> But using a global xarray would also be incorrect if the endpoint IDs are
> only unique within a 'viommu_dev' or 'viommu_domain' rather than globally.
> 
> I need to find the correct 'viommu_endpoint' to call iommu_report_device_fault()
> with the correct device.

The endpoint IDs are only unique within a viommu_dev, so a global xarray
wouldn't work, but one in viommu_dev would. In vdomain it doesn't work
either, because we can't get to the domain from the fault handler without
first finding the endpoint.
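
For instance (a sketch; 'endpoints_xa' is an assumed field name for an
xarray in viommu_dev, populated at probe time and indexed by endpoint ID):

	/* viommu_probe_device() */
	ret = xa_insert(&viommu->endpoints_xa, eid, vdev, GFP_KERNEL);

	/* fault path */
	vdev = xa_load(&viommu->endpoints_xa, endpoint);
	if (vdev)
		ret = iommu_report_device_fault(vdev->dev, &fault_evt);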

Thanks,
Jean


^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults
@ 2021-10-11  9:16         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-10-11  9:16 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: mst, Will Deacon, open list, virtualization,
	list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>, ,
	Robin Murphy, Linux ARM

Hi Vivek,

On Mon, Oct 11, 2021 at 01:41:15PM +0530, Vivek Gautam wrote:
> > > +     list_for_each_entry(ep, &viommu->endpoints, list) {
> > > +             if (ep->eid == endpoint) {
> > > +                     vdev = ep->vdev;
> 
> I have a question here though -
> Is endpoint-ID unique across all the endpoints available per 'viommu_dev' or
> per 'viommu_domain'?
> If it is per 'viommu_domain' then the above list is also incorrect.
> As you pointed to in the patch [1] -
> [PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served
> by viommu_dev
> I am planning to add endpoint ID into a static global xarray in
> viommu_probe_device() as below:
> 
>         vdev_for_each_id(i, eid, vdev) {
>                 ret = xa_insert(&viommu_ep_ids, eid, vdev, GFP_KERNEL);
>                 if (ret)
>                         goto err_free_dev;
>         }
> 
> and replace the above list traversal as below:
> 
>                 xa_lock_irqsave(&viommu_ep_ids, flags);
>                 xa_for_each(&viommu_ep_ids, eid, vdev) {
>                         if (eid == endpoint) {
>                                 ret =
> iommu_report_device_fault(vdev->dev, &fault_evt);
>                                 if (ret)
>                                         dev_err(vdev->dev, "Couldn't
> handle page request\n");
>                         }
>                 }
>                 xa_unlock_irqrestore(&viommu_ep_ids, flags);
> 
> But using a global xarray would also be incorrect if the endpointsID are global
> across 'viommu_domain'.
> 
> I need to find the correct 'viommu_endpoint' to call iommu_report_device_fault()
> with the correct device.

The endpoint IDs are only unique across viommu_dev, so a global xarray
wouldn't work but one in viommu_dev would. In vdomain it doesn't work
either because we can't get to the domain from the fault handler without
first finding the endpoint

Thanks,
Jean

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults
@ 2021-10-11  9:16         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 97+ messages in thread
From: Jean-Philippe Brucker @ 2021-10-11  9:16 UTC (permalink / raw)
  To: Vivek Gautam
  Cc: Tian, Kevin, mst, Will Deacon, open list, virtualization,
	list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>, ,
	Robin Murphy, Linux ARM

Hi Vivek,

On Mon, Oct 11, 2021 at 01:41:15PM +0530, Vivek Gautam wrote:
> > > +     list_for_each_entry(ep, &viommu->endpoints, list) {
> > > +             if (ep->eid == endpoint) {
> > > +                     vdev = ep->vdev;
> 
> I have a question here though -
> Is endpoint-ID unique across all the endpoints available per 'viommu_dev' or
> per 'viommu_domain'?
> If it is per 'viommu_domain' then the above list is also incorrect.
> As you pointed to in the patch [1] -
> [PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served
> by viommu_dev
> I am planning to add endpoint ID into a static global xarray in
> viommu_probe_device() as below:
> 
>         vdev_for_each_id(i, eid, vdev) {
>                 ret = xa_insert(&viommu_ep_ids, eid, vdev, GFP_KERNEL);
>                 if (ret)
>                         goto err_free_dev;
>         }
> 
> and replace the above list traversal as below:
> 
>                 xa_lock_irqsave(&viommu_ep_ids, flags);
>                 xa_for_each(&viommu_ep_ids, eid, vdev) {
>                         if (eid == endpoint) {
>                                 ret =
> iommu_report_device_fault(vdev->dev, &fault_evt);
>                                 if (ret)
>                                         dev_err(vdev->dev, "Couldn't
> handle page request\n");
>                         }
>                 }
>                 xa_unlock_irqrestore(&viommu_ep_ids, flags);
> 
> But using a global xarray would also be incorrect if the endpointsID are global
> across 'viommu_domain'.
> 
> I need to find the correct 'viommu_endpoint' to call iommu_report_device_fault()
> with the correct device.

The endpoint IDs are only unique across viommu_dev, so a global xarray
wouldn't work but one in viommu_dev would. In vdomain it doesn't work
either because we can't get to the domain from the fault handler without
first finding the endpoint

Thanks,
Jean


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 97+ messages in thread

* Re: [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults
  2021-10-11  9:16         ` Jean-Philippe Brucker
@ 2021-10-11  9:20           ` Vivek Kumar Gautam
  -1 siblings, 0 replies; 97+ messages in thread
From: Vivek Kumar Gautam @ 2021-10-11  9:20 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: Tian, Kevin, mst, Will Deacon, open list, virtualization,
	IOMMU DRIVERS, Joerg Roedel, iommu, Robin Murphy,
	Linux ARM

Hi Jean,


On 10/11/21 2:46 PM, Jean-Philippe Brucker wrote:
> Hi Vivek,
> 
> On Mon, Oct 11, 2021 at 01:41:15PM +0530, Vivek Gautam wrote:
>>>> +     list_for_each_entry(ep, &viommu->endpoints, list) {
>>>> +             if (ep->eid == endpoint) {
>>>> +                     vdev = ep->vdev;
>>
>> I have a question here though -
>> Is the endpoint ID unique across all the endpoints per 'viommu_dev', or
>> per 'viommu_domain'?
>> If it is per 'viommu_domain' then the above list is also incorrect.
>> As you pointed out in patch [1] -
>> [PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served
>> by viommu_dev
>> I am planning to add endpoint ID into a static global xarray in
>> viommu_probe_device() as below:
>>
>>          vdev_for_each_id(i, eid, vdev) {
>>                  ret = xa_insert(&viommu_ep_ids, eid, vdev, GFP_KERNEL);
>>                  if (ret)
>>                          goto err_free_dev;
>>          }
>>
>> and replace the above list traversal as below:
>>
>>                  xa_lock_irqsave(&viommu_ep_ids, flags);
>>                  xa_for_each(&viommu_ep_ids, eid, vdev) {
>>                          if (eid == endpoint) {
>>                                  ret = iommu_report_device_fault(vdev->dev,
>>                                                                  &fault_evt);
>>                                  if (ret)
>>                                          dev_err(vdev->dev,
>>                                                  "Couldn't handle page request\n");
>>                          }
>>                  }
>>                  xa_unlock_irqrestore(&viommu_ep_ids, flags);
>>
>> But using a global xarray would also be incorrect if the endpoint IDs are
>> only unique within a 'viommu_domain', since the same ID could then appear
>> under different viommu instances.
>>
>> I need to find the correct 'viommu_endpoint' to call iommu_report_device_fault()
>> with the correct device.
> 
> The endpoint IDs are only unique within a viommu_dev, so a global xarray
> wouldn't work, but one in viommu_dev would. A per-vdomain xarray doesn't
> work either, because we can't get to the domain from the fault handler
> without first finding the endpoint.

Thanks. That's easy then. I'll have an xarray in viommu_dev and iterate
over it from the fault handler.
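
Something like the below, roughly. (Sketch only - and since the eid is the
xarray index, a plain xa_load() can replace the iteration; the release-path
cleanup is my assumption.)

	/* fault handler */
	vdev = xa_load(&viommu->endpoints, endpoint);
	if (vdev) {
		ret = iommu_report_device_fault(vdev->dev, &fault_evt);
		if (ret)
			dev_err(vdev->dev, "Couldn't handle page request\n");
	}

	/* undo the xa_insert()s when the endpoint device is released */
	vdev_for_each_id(i, eid, vdev)
		xa_erase(&viommu->endpoints, eid);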

Best regards
Vivek

^ permalink raw reply	[flat|nested] 97+ messages in thread

end of thread, other threads:[~2021-10-11  9:23 UTC | newest]

Thread overview: 97+ messages
2021-04-23  9:51 [PATCH RFC v1 00/11] iommu/virtio: vSVA support with Arm Vivek Gautam
2021-04-23  9:51 ` [PATCH RFC v1 01/11] uapi/virtio-iommu: Add page request grp-id and flags information Vivek Gautam
2021-09-21 15:58   ` Jean-Philippe Brucker
2021-09-30  4:56     ` Vivek Kumar Gautam
2021-09-30  8:49       ` Jean-Philippe Brucker
2021-09-30 10:57         ` Vivek Kumar Gautam
2021-04-23  9:51 ` [PATCH RFC v1 02/11] iommu/virtio: Maintain a list of endpoints served by viommu_dev Vivek Gautam
2021-09-21 15:59   ` Jean-Philippe Brucker
2021-09-30  9:17     ` Vivek Kumar Gautam
2021-04-23  9:51 ` [PATCH RFC v1 03/11] iommu/virtio: Handle incoming page faults Vivek Gautam
2021-09-21 16:03   ` Jean-Philippe Brucker
2021-10-11  8:11     ` Vivek Gautam
2021-10-11  9:16       ` Jean-Philippe Brucker
2021-10-11  9:20         ` Vivek Kumar Gautam
2021-04-23  9:51 ` [PATCH RFC v1 04/11] iommu/virtio: Add a io page fault queue Vivek Gautam
2021-04-23  9:51 ` [PATCH RFC v1 05/11] iommu/virtio: Add SVA feature and related enable/disable callbacks Vivek Gautam
2021-09-21 16:04   ` Jean-Philippe Brucker
2021-04-23  9:51 ` [PATCH RFC v1 06/11] iommu/pasid-table: Add pasid table ops for shared context management Vivek Gautam
2021-04-23  9:51 ` [PATCH RFC v1 07/11] iommu/arm-smmu-v3: Move shared context descriptor code to cd-lib Vivek Gautam
2021-04-23  9:51 ` [PATCH RFC v1 08/11] iommu/arm-smmu-v3: Implement shared context alloc and free ops Vivek Gautam
2021-09-21 16:07   ` Jean-Philippe Brucker
2021-09-30  9:50     ` Vivek Kumar Gautam
2021-04-23  9:51 ` [PATCH RFC v1 09/11] iommu/virtio: Implement sva bind/unbind calls Vivek Gautam
2021-09-21 16:09   ` Jean-Philippe Brucker
2021-04-23  9:51 ` [PATCH RFC v1 10/11] uapi/virtio-iommu: Add a new request type to send page response Vivek Gautam
2021-09-21 16:16   ` Jean-Philippe Brucker
2021-09-30  9:24     ` Vivek Kumar Gautam
2021-10-06  9:55       ` Jean-Philippe Brucker
2021-04-23  9:51 ` [PATCH RFC v1 11/11] iommu/virtio: Add support " Vivek Gautam