linux-kernel.vger.kernel.org archive mirror
* [PATCH v6 0/3] Add uacce module for Accelerator
@ 2019-10-16  8:34 Zhangfei Gao
  2019-10-16  8:34 ` [PATCH v6 1/3] uacce: Add documents for uacce Zhangfei Gao
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Zhangfei Gao @ 2019-10-16  8:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu, jonathan.cameron,
	grant.likely, jean-philippe, ilias.apalodimas, francois.ozog,
	kenneth-lee-2012, Wangzhou, haojian . zhuang
  Cc: linux-accelerators, linux-kernel, linux-crypto, Zhangfei Gao

Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
provide Shared Virtual Addressing (SVA) between accelerators and processes,
so that an accelerator can access any data structure of the main CPU.
This differs from conventional data sharing between the CPU and an I/O
device, which shares data content rather than addresses.
Because the address space is unified, the hardware and the user-space
process can use the same virtual addresses when communicating.

Uacce is intended to be used with Jean-Philippe Brucker's SVA
patchset [1], which enables I/O page faults and PASID support.
We keep verifying against Jean's sva/current branch [2]
and against Eric's SMMUv3 nested-stage patches [3].

This series, with the related zip & qm drivers:
https://github.com/Linaro/linux-kernel-warpdrive/tree/5.4-rc1-uacce-v6

The library and user application:
https://github.com/Linaro/warpdrive/tree/wdprd-v1-upstream

References:
[1] http://jpbrucker.net/sva/
[2] http://www.linux-arm.org/git?p=linux-jpb.git;a=shortlog;h=refs/heads/sva/current
[3] https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9

Change History:
v6:
Split the sysfs qfrs_size attribute into separate per-region files, as
suggested by Jonathan.
Fix the crypto daily-build issue; based on the crypto code base, also 5.4-rc1.

v5:
Add an example patch using the uacce interface, as suggested by Greg:
0003-crypto-hisilicon-register-zip-engine-to-uacce.patch

v4:
Based on 5.4-rc1.
For other drivers integrating uacce: if uacce is not compiled in,
uacce_register() returns an error and uacce_unregister() is a no-op.
Simplify the uacce flags: UACCE_DEV_SVA.
Address Greg's comments:
fix the state machine, remove syslog messages that user space could trigger, etc.

v3:
As recommended by Greg, use struct uacce_device instead of struct uacce,
and use a struct cdev * in struct uacce_device, so that the cdev can
release itself when its refcount drops to 0.
The two structures are thus decoupled and maintain themselves.
Also add dev.release for put_device.

v2:
Address comments from Greg and Jonathan.
Modify the uacce_register interface.
Drop noiommu mode for now.

v1:
1. Rebase to 5.3-rc1
2. Build on the iommu interface
3. Verify with Jean's SVA and Eric's nested-mode IOMMU patches
4. The user library has developed a lot: it supports zlib, openssl, etc.
5. Move to misc first

RFC3:
https://lkml.org/lkml/2018/11/12/1951

RFC2:
https://lwn.net/Articles/763990/


Background of why Uacce:
A von Neumann processor is not good at general data manipulation: it is
designed for control-bound rather than data-bound applications, and the
latter need less control-path machinery and more (or more specialized)
ALUs. So more and more heterogeneous processors, such as
encryption/decryption accelerators, TPUs, or
EDGE (Explicit Data Graph Execution) processors, are being introduced
these days to gain better performance or power efficiency for particular
applications.

There are generally two ways to make use of these heterogeneous processors:

The first is to make them co-processors, like the FPU.
This is good for some applications, but it has its cons:
it changes the ISA permanently; all state elements must be saved when the
process is switched out, yet most data-bound processors have a huge set
of state elements; and it makes the kernel scheduler more complex.

The second is the accelerator model.
The accelerator is treated as an I/O device from the CPU's point of view
(though it need not be one physically). A process running on the CPU
holds a context on the accelerator and sends instructions to it, much as
it would call a function or run a thread that uses the FPU.
The context is bound to the accelerator itself, so the state elements
remain in the hardware context until the context is released.

We believe this is the core feature of an "Accelerator" vs. Co-processor
or other heterogeneous processors.

The intention of Uacce is to provide the basic facilities to support
this scenario. Its first step is to make sure the accelerator and the
process share the same address space, so the accelerator ISA can directly
address any data structure of the main CPU.
This differs from data sharing between the CPU and an I/O device, which
shares data content rather than addresses, and thus also from other DMA
libraries.

In the future, we may add more facilities to support linking an
accelerator library into the main application, or managing the
accelerator context as a special thread.
Either way, this is a solid starting point for a new processor to be used
as an "accelerator", since address sharing is the essential requirement.

Kenneth Lee (2):
  uacce: Add documents for uacce
  uacce: add uacce driver

Zhangfei Gao (1):
  crypto: hisilicon - register zip engine to uacce

 Documentation/ABI/testing/sysfs-driver-uacce |  65 ++
 Documentation/misc-devices/uacce.rst         | 297 ++++++++
 drivers/crypto/hisilicon/qm.c                | 254 ++++++-
 drivers/crypto/hisilicon/qm.h                |  13 +-
 drivers/crypto/hisilicon/zip/zip_main.c      |  39 +-
 drivers/misc/Kconfig                         |   1 +
 drivers/misc/Makefile                        |   1 +
 drivers/misc/uacce/Kconfig                   |  13 +
 drivers/misc/uacce/Makefile                  |   2 +
 drivers/misc/uacce/uacce.c                   | 995 +++++++++++++++++++++++++++
 include/linux/uacce.h                        | 168 +++++
 include/uapi/misc/uacce/qm.h                 |  22 +
 include/uapi/misc/uacce/uacce.h              |  41 ++
 13 files changed, 1875 insertions(+), 36 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-driver-uacce
 create mode 100644 Documentation/misc-devices/uacce.rst
 create mode 100644 drivers/misc/uacce/Kconfig
 create mode 100644 drivers/misc/uacce/Makefile
 create mode 100644 drivers/misc/uacce/uacce.c
 create mode 100644 include/linux/uacce.h
 create mode 100644 include/uapi/misc/uacce/qm.h
 create mode 100644 include/uapi/misc/uacce/uacce.h

-- 
2.7.4



* [PATCH v6 1/3] uacce: Add documents for uacce
  2019-10-16  8:34 [PATCH v6 0/3] Add uacce module for Accelerator Zhangfei Gao
@ 2019-10-16  8:34 ` Zhangfei Gao
  2019-10-16 18:36   ` Jean-Philippe Brucker
  2019-10-16  8:34 ` [PATCH v6 2/3] uacce: add uacce driver Zhangfei Gao
  2019-10-16  8:34 ` [PATCH v6 3/3] crypto: hisilicon - register zip engine to uacce Zhangfei Gao
  2 siblings, 1 reply; 15+ messages in thread
From: Zhangfei Gao @ 2019-10-16  8:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu, jonathan.cameron,
	grant.likely, jean-philippe, ilias.apalodimas, francois.ozog,
	kenneth-lee-2012, Wangzhou, haojian . zhuang
  Cc: linux-accelerators, linux-kernel, linux-crypto, Kenneth Lee,
	Zaibo Xu, Zhangfei Gao

From: Kenneth Lee <liguozhu@hisilicon.com>

Uacce (Unified/User-space-access-intended Accelerator Framework) is
a kernel module that aims to provide Shared Virtual Addressing (SVA)
between accelerators and processes.

This patch adds a document explaining how it works.

Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
Signed-off-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
---
 Documentation/misc-devices/uacce.rst | 297 +++++++++++++++++++++++++++++++++++
 1 file changed, 297 insertions(+)
 create mode 100644 Documentation/misc-devices/uacce.rst

diff --git a/Documentation/misc-devices/uacce.rst b/Documentation/misc-devices/uacce.rst
new file mode 100644
index 0000000..05c1e09
--- /dev/null
+++ b/Documentation/misc-devices/uacce.rst
@@ -0,0 +1,297 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Introduction of Uacce
+=========================
+
+Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
+provide Shared Virtual Addressing (SVA) between accelerators and processes,
+so that an accelerator can access any data structure of the main CPU.
+This differs from conventional data sharing between the CPU and an I/O
+device, which shares data content rather than addresses.
+Because the address space is unified, the hardware and the user-space
+process can use the same virtual addresses when communicating.
+Uacce treats the hardware accelerator as a heterogeneous processor: the
+IOMMU shares the same CPU page tables and, as a result, the same
+translation from va to pa.
+
+	 __________________________       __________________________
+	|                          |     |                          |
+	|  User application (CPU)  |     |   Hardware Accelerator   |
+	|__________________________|     |__________________________|
+
+	             |                                 |
+	             | va                              | va
+	             V                                 V
+                 __________                        __________
+                |          |                      |          |
+                |   MMU    |                      |  IOMMU   |
+                |__________|                      |__________|
+		     |                                 |
+	             |                                 |
+	             V pa                              V pa
+		 _______________________________________
+		|                                       |
+		|              Memory                   |
+		|_______________________________________|
+
+
+
+Architecture
+------------
+
+Uacce is the kernel module in charge of the IOMMU and address sharing.
+The user-space drivers and libraries are called WarpDrive.
+
+A virtual concept, the queue, is used for communication. It provides a
+FIFO-like interface and maintains a unified address space between the
+application and all involved hardware.
+
+                             ___________________                  ________________
+                            |                   |   user API     |                |
+                            | WarpDrive library | ------------>  |  user driver   |
+                            |___________________|                |________________|
+                                     |                                    |
+                                     |                                    |
+                                     | queue fd                           |
+                                     |                                    |
+                                     |                                    |
+                                     v                                    |
+     ___________________         _________                                |
+    |                   |       |         |                               | mmap memory
+    | Other framework   |       |  uacce  |                               | r/w interface
+    | crypto/nic/others |       |_________|                               |
+    |___________________|                                                 |
+             |                       |                                    |
+             | register              | register                           |
+             |                       |                                    |
+             |                       |                                    |
+             |                _________________       __________          |
+             |               |                 |     |          |         |
+              -------------  |  Device Driver  |     |  IOMMU   |         |
+                             |_________________|     |__________|         |
+                                     |                                    |
+                                     |                                    V
+                                     |                            ___________________
+                                     |                           |                   |
+                                     --------------------------  |  Device(Hardware) |
+                                                                 |___________________|
+
+
+How does it work
+================
+
+Uacce uses mmap and the IOMMU to achieve this.
+
+Uacce creates a chrdev for every device registered to it. A new queue is
+created when a user application opens the chrdev, and the file descriptor
+is used as the user handle of the queue.
+The accelerator device presents itself as a Uacce object, which is
+exported as a chrdev to user space. The user application communicates
+with the hardware via ioctl (the control path) or shared memory (the data path).
+
+The control path to the hardware is via file operations, while the data
+path is via mmap of the queue fd.
+
+The queue file address space::
+
+	enum uacce_qfrt {
+		UACCE_QFRT_MMIO = 0,	/* device mmio region */
+		UACCE_QFRT_DKO = 1,	/* device kernel-only region */
+		UACCE_QFRT_DUS = 2,	/* device user share region */
+		UACCE_QFRT_SS = 3,	/* static shared memory (for non-sva devices) */
+		UACCE_QFRT_MAX = 16,
+	};
+
+All regions are optional and differ from one device type to another. The
+communication protocol is wrapped by the user driver.
+
+The device mmio region is mapped to the hardware mmio space. It is
+generally used for doorbells or other notifications to the hardware; it
+is not fast enough to serve as a data channel.
+
+The device kernel-only region is necessary only if the device IOMMU has no
+PASID support or cannot issue kernel-only address requests. In that case,
+if the kernel needs to share memory with the device, it has to share the
+iova address space with the user process via mmap, to prevent iova conflicts.
+
+The device user share region is used to share data buffers between the user
+process and the device. It can be merged into other regions, but a separate
+region helps with device state management; for example, the device can be
+started when this region is mapped.
+
+The static shared virtual memory region is used to share data buffers with
+the device and can be shared among queues and devices.
+Its size is set according to the application's requirements.
+
+
+The user API
+------------
+
+We adopt a polling-style interface in user space: ::
+
+	int wd_request_queue(struct wd_queue *q);
+	void wd_release_queue(struct wd_queue *q);
+	int wd_send(struct wd_queue *q, void *req);
+	int wd_recv(struct wd_queue *q, void **req);
+	int wd_recv_sync(struct wd_queue *q, void **req);
+	void wd_flush(struct wd_queue *q);
+
+wd_recv_sync() is a wrapper around its non-sync version. It traps into
+the kernel and waits until the queue becomes available.
+
+If the queue does not support SVA/SVM, the following helper functions
+can be used to create static shared virtual memory: ::
+
+	void *wd_reserve_memory(struct wd_queue *q, size_t size);
+	int wd_share_reserved_memory(struct wd_queue *q,
+				     struct wd_queue *target_q);
+
+The user API is not mandatory. It is simply a suggestion, a hint of what
+the kernel interface is supposed to be.
+
+
+The user driver
+---------------
+
+The queue file mmap space needs a user driver to wrap the communication
+protocol. Uacce provides some attributes in sysfs for the user driver to
+match the right accelerator.
+See Documentation/ABI/testing/sysfs-driver-uacce for more details.
+
+
+The Uacce register API
+-----------------------
+The register API is defined in include/linux/uacce.h::
+
+	struct uacce_interface {
+		char name[32];
+		unsigned int flags;
+		struct uacce_ops *ops;
+	};
+
+According to the IOMMU capability, uacce_interface flags can be:
+
+UACCE_DEV_SVA (0x1)
+	Shared virtual addressing is supported.
+
+UACCE_DEV_SHARE_DOMAIN (0)
+	Used for devices that do not support PASID.
+
+The registration functions are::
+
+	struct uacce_device *uacce_register(struct device *parent,
+					    struct uacce_interface *interface);
+	void uacce_unregister(struct uacce_device *uacce);
+
+uacce_register() can (a) return ERR_PTR(-ENODEV) if the uacce module is
+not compiled in, (b) succeed with the desired flags, or (c) succeed with
+negotiated flags (e.g. UACCE_DEV_SVA requested but not granted), so the
+user driver must check both the return value and the negotiated uacce->flags.
+
+
+The Memory Sharing Model
+------------------------
+The ideal form of a Uacce device is one supporting SVM/SVA. We built this
+upon Jean-Philippe Brucker's SVA patches [1].
+
+If the hardware supports UACCE_DEV_SVA, the user process's page table is
+shared with the opened queue, so the device can access any address in the
+process address space and can raise a page fault if the physical page is
+not yet available.
+The device can also access kernel-space addresses, which are covered by a
+separate page table particular to the kernel. Most IOMMU implementations
+handle this with a tag on the device's address request; for example, the
+ARM SMMU uses the SSV bit to indicate whether an address request is for
+kernel or user space.
+Queue file regions that can be used:
+UACCE_QFRT_MMIO: device mmio region (map to user)
+UACCE_QFRT_DUS: device user share (map to dev and user)
+
+If the device does not support UACCE_DEV_SVA, Uacce allows only one
+process at a time. The DMA API cannot be used either, since Uacce creates
+an unmanaged iommu_domain for the device.
+Queue file regions that can be used:
+UACCE_QFRT_MMIO: device mmio region (map to user)
+UACCE_QFRT_DKO: device kernel-only (map to dev, no user)
+UACCE_QFRT_DUS: device user share (map to dev and user)
+UACCE_QFRT_SS:  static shared memory (map to devs and user)
+
+
+The Fork Scenario
+=================
+For a process with allocated queues and shared memory, what happens if it
+forks a child?
+
+The fd of the queue is duplicated on fork, so the child can send requests
+to the same queue as its parent. But requests sent from any process other
+than the one that opened the queue will be blocked.
+
+It is recommended to open the queue file with O_CLOEXEC.
+
+The queue mmap space has VM_DONTCOPY set in its VMA, so the child loses
+all those VMAs.
+
+This is one reason why Uacce does not adopt the model used by VFIO and
+InfiniBand. Both can register any user pointer for hardware sharing, but
+they cannot support fork while DMA is in progress: the copy-on-write
+procedure would make the parent process lose the physical pages it is
+sharing with the device.
+
+
+Difference to the VFIO and IB framework
+---------------------------------------
+The essential function of Uacce is to let the device access user
+addresses directly. Many device drivers in the kernel do the same, and
+both VFIO and IB provide similar functions at the framework level.
+
+But Uacce has a different goal: "sharing the address space". It does not
+treat a request to the accelerator as an enclosed data structure; it
+treats the accelerator as another thread of the same process, so the
+accelerator can refer to any address used by the process.
+
+Both VFIO and IB treat this as "memory sharing", not "address sharing".
+They care about sharing a block of memory; but if an address is stored in
+that block and refers to another memory region, that address may not be
+valid.
+
+By adding more constraints to the VFIO and IB frameworks we might, in
+some sense, achieve a similar goal, but we finally gave that up. Both
+VFIO and IB carry extra assumptions that are unnecessary to Uacce, and
+the two sides would hurt each other if we tried to merge them.
+
+VFIO manages a hardware's resources as a "virtual device". If a device
+needs to serve a separate application, it must isolate the resources as a
+separate virtual device, yet the life cycles of the application and the
+virtual device are not necessarily related. Most of the concepts needed
+to make it a "device" (bus, driver, probe and so on) are unnecessary
+here, and the logic added to VFIO for address sharing does not help with
+"creating a virtual device".
+
+IB creates a "verbs" standard for sharing memory regions with a remote
+entity. Most of these verbs exist to keep memory regions synchronized
+between entities. That is not what an accelerator needs: it sits in the
+same memory system as the CPU, so the local memory terms are good enough
+and extra "verbs" are unnecessary. Its queue (like a queue pair in IB) is
+simply the communication channel to the accelerator hardware; it has
+nothing to do with memory itself.
+
+Further, both VFIO and IB use "pinning" (get_user_pages) to lock local
+memory in place. This is flexible, but it can cause other problems. For
+example, if the user process forks a child, the COW procedure may make
+the parent process lose the pages it is sharing with the device. This may
+be fixed in the future, but it is not going to be easy. (There was a
+discussion about this at the Linux Plumbers Conference 2018 [2].)
+
+So we choose to build the solution directly on top of the IOMMU
+interface. The IOMMU is the essential way, from the hardware perspective,
+for a device and a process to share their page mappings, so it is safe to
+build a software solution on this assumption. Uacce manages the IOMMU
+interface for the accelerator device, so the device driver can export
+some of its resources to user space, and Uacce can then make sure the
+device and the process have the same address space.
+
+
+References
+==========
+.. [1] http://jpbrucker.net/sva/
+.. [2] https://lwn.net/Articles/774411/
-- 
2.7.4



* [PATCH v6 2/3] uacce: add uacce driver
  2019-10-16  8:34 [PATCH v6 0/3] Add uacce module for Accelerator Zhangfei Gao
  2019-10-16  8:34 ` [PATCH v6 1/3] uacce: Add documents for uacce Zhangfei Gao
@ 2019-10-16  8:34 ` Zhangfei Gao
  2019-10-16 17:28   ` Jean-Philippe Brucker
  2019-10-22 18:49   ` Jerome Glisse
  2019-10-16  8:34 ` [PATCH v6 3/3] crypto: hisilicon - register zip engine to uacce Zhangfei Gao
  2 siblings, 2 replies; 15+ messages in thread
From: Zhangfei Gao @ 2019-10-16  8:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu, jonathan.cameron,
	grant.likely, jean-philippe, ilias.apalodimas, francois.ozog,
	kenneth-lee-2012, Wangzhou, haojian . zhuang
  Cc: linux-accelerators, linux-kernel, linux-crypto, Kenneth Lee,
	Zaibo Xu, Zhangfei Gao

From: Kenneth Lee <liguozhu@hisilicon.com>

Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
provide Shared Virtual Addressing (SVA) between accelerators and processes,
so that an accelerator can access any data structure of the main CPU.
This differs from conventional data sharing between the CPU and an I/O
device, which shares data content rather than addresses.
Because the address space is unified, the hardware and the user-space
process can use the same virtual addresses when communicating.

Uacce creates a chrdev for every registration; a queue is allocated to
the process when the chrdev is opened. The process can then access the
hardware resources by interacting with the queue file. By mmapping the
queue file space into user space, the process can put requests directly
to the hardware without a syscall into kernel space.

Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
Signed-off-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
---
 Documentation/ABI/testing/sysfs-driver-uacce |  65 ++
 drivers/misc/Kconfig                         |   1 +
 drivers/misc/Makefile                        |   1 +
 drivers/misc/uacce/Kconfig                   |  13 +
 drivers/misc/uacce/Makefile                  |   2 +
 drivers/misc/uacce/uacce.c                   | 995 +++++++++++++++++++++++++++
 include/linux/uacce.h                        | 168 +++++
 include/uapi/misc/uacce/uacce.h              |  41 ++
 8 files changed, 1286 insertions(+)
 create mode 100644 Documentation/ABI/testing/sysfs-driver-uacce
 create mode 100644 drivers/misc/uacce/Kconfig
 create mode 100644 drivers/misc/uacce/Makefile
 create mode 100644 drivers/misc/uacce/uacce.c
 create mode 100644 include/linux/uacce.h
 create mode 100644 include/uapi/misc/uacce/uacce.h

diff --git a/Documentation/ABI/testing/sysfs-driver-uacce b/Documentation/ABI/testing/sysfs-driver-uacce
new file mode 100644
index 0000000..e48333c
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-driver-uacce
@@ -0,0 +1,65 @@
+What:           /sys/class/uacce/hisi_zip-<n>/id
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Id of the device.
+
+What:           /sys/class/uacce/hisi_zip-<n>/api
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    API of the device, used by the application to match the correct driver
+
+What:           /sys/class/uacce/hisi_zip-<n>/flags
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Attributes of the device, see UACCE_DEV_xxx flag defined in uacce.h
+
+What:           /sys/class/uacce/hisi_zip-<n>/available_instances
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Number of available instances left on the device
+
+What:           /sys/class/uacce/hisi_zip-<n>/algorithms
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Algorithms supported by this accelerator
+
+What:           /sys/class/uacce/hisi_zip-<n>/qfrt_mmio_size
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Size, in pages, of the mmio region queue file
+
+What:           /sys/class/uacce/hisi_zip-<n>/qfrt_dko_size
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Size, in pages, of the dko region queue file
+
+What:           /sys/class/uacce/hisi_zip-<n>/qfrt_dus_size
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Size, in pages, of the dus region queue file
+
+What:           /sys/class/uacce/hisi_zip-<n>/qfrt_ss_size
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Size, in pages, of the ss region queue file
+
+What:           /sys/class/uacce/hisi_zip-<n>/numa_distance
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    NUMA distance from the device node to the CPU node
+
+What:           /sys/class/uacce/hisi_zip-<n>/node_id
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Id of the numa node
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index c55b637..929feb0 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -481,4 +481,5 @@ source "drivers/misc/cxl/Kconfig"
 source "drivers/misc/ocxl/Kconfig"
 source "drivers/misc/cardreader/Kconfig"
 source "drivers/misc/habanalabs/Kconfig"
+source "drivers/misc/uacce/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index c1860d3..9abf292 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -56,4 +56,5 @@ obj-$(CONFIG_OCXL)		+= ocxl/
 obj-y				+= cardreader/
 obj-$(CONFIG_PVPANIC)   	+= pvpanic.o
 obj-$(CONFIG_HABANA_AI)		+= habanalabs/
+obj-$(CONFIG_UACCE)		+= uacce/
 obj-$(CONFIG_XILINX_SDFEC)	+= xilinx_sdfec.o
diff --git a/drivers/misc/uacce/Kconfig b/drivers/misc/uacce/Kconfig
new file mode 100644
index 0000000..e854354
--- /dev/null
+++ b/drivers/misc/uacce/Kconfig
@@ -0,0 +1,13 @@
+config UACCE
+	tristate "Accelerator Framework for User Land"
+	depends on IOMMU_API
+	help
+	  UACCE provides an interface for user processes to access hardware
+	  without kernel-space interaction on the data path.
+
+	  The user-space interface is described in
+	  include/uapi/misc/uacce/uacce.h
+
+	  See Documentation/misc-devices/uacce.rst for more details.
+
+	  If you don't know what to do here, say N.
diff --git a/drivers/misc/uacce/Makefile b/drivers/misc/uacce/Makefile
new file mode 100644
index 0000000..5b4374e
--- /dev/null
+++ b/drivers/misc/uacce/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+obj-$(CONFIG_UACCE) += uacce.o
diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
new file mode 100644
index 0000000..534ddc3
--- /dev/null
+++ b/drivers/misc/uacce/uacce.c
@@ -0,0 +1,995 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/compat.h>
+#include <linux/dma-iommu.h>
+#include <linux/file.h>
+#include <linux/irqdomain.h>
+#include <linux/module.h>
+#include <linux/poll.h>
+#include <linux/sched/signal.h>
+#include <linux/uacce.h>
+
+static struct class *uacce_class;
+static DEFINE_IDR(uacce_idr);
+static dev_t uacce_devt;
+static DEFINE_MUTEX(uacce_mutex);
+static const struct file_operations uacce_fops;
+
+static int uacce_queue_map_qfr(struct uacce_queue *q,
+			       struct uacce_qfile_region *qfr)
+{
+	struct device *dev = q->uacce->pdev;
+	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+	int i, j, ret;
+
+	if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA))
+		return 0;
+
+	if (!domain)
+		return -ENODEV;
+
+	for (i = 0; i < qfr->nr_pages; i++) {
+		ret = iommu_map(domain, qfr->iova + i * PAGE_SIZE,
+				page_to_phys(qfr->pages[i]),
+				PAGE_SIZE, qfr->prot | q->uacce->prot);
+		if (ret)
+			goto err_with_map_pages;
+
+		get_page(qfr->pages[i]);
+	}
+
+	return 0;
+
+err_with_map_pages:
+	for (j = i - 1; j >= 0; j--) {
+		iommu_unmap(domain, qfr->iova + j * PAGE_SIZE, PAGE_SIZE);
+		put_page(qfr->pages[j]);
+	}
+	return ret;
+}
+
+static void uacce_queue_unmap_qfr(struct uacce_queue *q,
+				  struct uacce_qfile_region *qfr)
+{
+	struct device *dev = q->uacce->pdev;
+	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+	int i;
+
+	if (!domain || !qfr)
+		return;
+
+	if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA))
+		return;
+
+	for (i = qfr->nr_pages - 1; i >= 0; i--) {
+		iommu_unmap(domain, qfr->iova + i * PAGE_SIZE, PAGE_SIZE);
+		put_page(qfr->pages[i]);
+	}
+}
+
+static int uacce_qfr_alloc_pages(struct uacce_qfile_region *qfr)
+{
+	int i, j;
+
+	qfr->pages = kcalloc(qfr->nr_pages, sizeof(*qfr->pages), GFP_ATOMIC);
+	if (!qfr->pages)
+		return -ENOMEM;
+
+	for (i = 0; i < qfr->nr_pages; i++) {
+		qfr->pages[i] = alloc_page(GFP_ATOMIC | __GFP_ZERO);
+		if (!qfr->pages[i])
+			goto err_with_pages;
+	}
+
+	return 0;
+
+err_with_pages:
+	for (j = i - 1; j >= 0; j--)
+		put_page(qfr->pages[j]);
+
+	kfree(qfr->pages);
+	return -ENOMEM;
+}
+
+static void uacce_qfr_free_pages(struct uacce_qfile_region *qfr)
+{
+	int i;
+
+	for (i = 0; i < qfr->nr_pages; i++)
+		put_page(qfr->pages[i]);
+
+	kfree(qfr->pages);
+}
+
+static inline int uacce_queue_mmap_qfr(struct uacce_queue *q,
+				       struct uacce_qfile_region *qfr,
+				       struct vm_area_struct *vma)
+{
+	int i, ret;
+
+	for (i = 0; i < qfr->nr_pages; i++) {
+		ret = remap_pfn_range(vma, vma->vm_start + (i << PAGE_SHIFT),
+				      page_to_pfn(qfr->pages[i]), PAGE_SIZE,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static struct uacce_qfile_region *
+uacce_create_region(struct uacce_queue *q, struct vm_area_struct *vma,
+		    enum uacce_qfrt type, unsigned int flags)
+{
+	struct uacce_qfile_region *qfr;
+	struct uacce_device *uacce = q->uacce;
+	unsigned long vm_pgoff;
+	int ret = -ENOMEM;
+
+	qfr = kzalloc(sizeof(*qfr), GFP_ATOMIC);
+	if (!qfr)
+		return ERR_PTR(-ENOMEM);
+
+	qfr->type = type;
+	qfr->flags = flags;
+	qfr->iova = vma->vm_start;
+	qfr->nr_pages = vma_pages(vma);
+
+	if (vma->vm_flags & VM_READ)
+		qfr->prot |= IOMMU_READ;
+
+	if (vma->vm_flags & VM_WRITE)
+		qfr->prot |= IOMMU_WRITE;
+
+	if (flags & UACCE_QFRF_SELFMT) {
+		if (!uacce->ops->mmap) {
+			ret = -EINVAL;
+			goto err_with_qfr;
+		}
+
+		ret = uacce->ops->mmap(q, vma, qfr);
+		if (ret)
+			goto err_with_qfr;
+		return qfr;
+	}
+
+	/* allocate memory */
+	if (flags & UACCE_QFRF_DMA) {
+		qfr->kaddr = dma_alloc_coherent(uacce->pdev,
+						qfr->nr_pages << PAGE_SHIFT,
+						&qfr->dma, GFP_KERNEL);
+		if (!qfr->kaddr) {
+			ret = -ENOMEM;
+			goto err_with_qfr;
+		}
+	} else {
+		ret = uacce_qfr_alloc_pages(qfr);
+		if (ret)
+			goto err_with_qfr;
+	}
+
+	/* map to device */
+	ret = uacce_queue_map_qfr(q, qfr);
+	if (ret)
+		goto err_with_pages;
+
+	/* mmap to user space */
+	if (flags & UACCE_QFRF_MMAP) {
+		if (flags & UACCE_QFRF_DMA) {
+			/*
+			 * dma_mmap_coherent() requires vm_pgoff to be 0;
+			 * restore vm_pgoff to its initial value afterwards.
+			 */
+			vm_pgoff = vma->vm_pgoff;
+			vma->vm_pgoff = 0;
+			ret = dma_mmap_coherent(uacce->pdev, vma, qfr->kaddr,
+						qfr->dma,
+						qfr->nr_pages << PAGE_SHIFT);
+			vma->vm_pgoff = vm_pgoff;
+		} else {
+			ret = uacce_queue_mmap_qfr(q, qfr, vma);
+		}
+
+		if (ret)
+			goto err_with_mapped_qfr;
+	}
+
+	return qfr;
+
+err_with_mapped_qfr:
+	uacce_queue_unmap_qfr(q, qfr);
+err_with_pages:
+	if (flags & UACCE_QFRF_DMA)
+		dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT,
+				  qfr->kaddr, qfr->dma);
+	else
+		uacce_qfr_free_pages(qfr);
+err_with_qfr:
+	kfree(qfr);
+
+	return ERR_PTR(ret);
+}
+
+static void uacce_destroy_region(struct uacce_queue *q,
+				 struct uacce_qfile_region *qfr)
+{
+	struct uacce_device *uacce = q->uacce;
+
+	if (qfr->flags & UACCE_QFRF_DMA) {
+		dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT,
+				  qfr->kaddr, qfr->dma);
+	} else if (qfr->pages) {
+		if (qfr->flags & UACCE_QFRF_KMAP && qfr->kaddr) {
+			vunmap(qfr->kaddr);
+			qfr->kaddr = NULL;
+		}
+
+		uacce_qfr_free_pages(qfr);
+	}
+	kfree(qfr);
+}
+
+static long uacce_cmd_share_qfr(struct uacce_queue *tgt, int fd)
+{
+	struct file *filep;
+	struct uacce_queue *src;
+	int ret = -EINVAL;
+
+	mutex_lock(&uacce_mutex);
+
+	if (tgt->state != UACCE_Q_STARTED)
+		goto out_with_lock;
+
+	filep = fget(fd);
+	if (!filep)
+		goto out_with_lock;
+
+	if (filep->f_op != &uacce_fops)
+		goto out_with_fd;
+
+	src = filep->private_data;
+	if (!src)
+		goto out_with_fd;
+
+	if (tgt->uacce->flags & UACCE_DEV_SVA)
+		goto out_with_fd;
+
+	if (!src->qfrs[UACCE_QFRT_SS] || tgt->qfrs[UACCE_QFRT_SS])
+		goto out_with_fd;
+
+	ret = uacce_queue_map_qfr(tgt, src->qfrs[UACCE_QFRT_SS]);
+	if (ret)
+		goto out_with_fd;
+
+	tgt->qfrs[UACCE_QFRT_SS] = src->qfrs[UACCE_QFRT_SS];
+	list_add(&tgt->list, &src->qfrs[UACCE_QFRT_SS]->qs);
+
+out_with_fd:
+	fput(filep);
+out_with_lock:
+	mutex_unlock(&uacce_mutex);
+	return ret;
+}
+
+static int uacce_start_queue(struct uacce_queue *q)
+{
+	struct uacce_qfile_region *qfr;
+	int ret = -EINVAL;
+	int i, j;
+
+	mutex_lock(&uacce_mutex);
+
+	if (q->state != UACCE_Q_INIT)
+		goto out_with_lock;
+
+	/*
+	 * Map KMAP qfrs into the kernel.
+	 * vmap() must be called from a context that can sleep.
+	 */
+	for (i = 0; i < UACCE_QFRT_MAX; i++) {
+		qfr = q->qfrs[i];
+		if (qfr && (qfr->flags & UACCE_QFRF_KMAP) && !qfr->kaddr) {
+			qfr->kaddr = vmap(qfr->pages, qfr->nr_pages, VM_MAP,
+					  PAGE_KERNEL);
+			if (!qfr->kaddr) {
+				ret = -ENOMEM;
+				goto err_with_vmap;
+			}
+		}
+	}
+
+	if (q->uacce->ops->start_queue) {
+		ret = q->uacce->ops->start_queue(q);
+		if (ret < 0)
+			goto err_with_vmap;
+	}
+
+	q->state = UACCE_Q_STARTED;
+	mutex_unlock(&uacce_mutex);
+
+	return 0;
+
+err_with_vmap:
+	for (j = i - 1; j >= 0; j--) {
+		qfr = q->qfrs[j];
+		if (qfr && qfr->kaddr) {
+			vunmap(qfr->kaddr);
+			qfr->kaddr = NULL;
+		}
+	}
+out_with_lock:
+	mutex_unlock(&uacce_mutex);
+	return ret;
+}
+
+static long uacce_put_queue(struct uacce_queue *q)
+{
+	struct uacce_device *uacce = q->uacce;
+
+	mutex_lock(&uacce_mutex);
+
+	if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue)
+		uacce->ops->stop_queue(q);
+
+	if ((q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED) &&
+	     uacce->ops->put_queue)
+		uacce->ops->put_queue(q);
+
+	q->state = UACCE_Q_ZOMBIE;
+	mutex_unlock(&uacce_mutex);
+
+	return 0;
+}
+
+static long uacce_fops_unl_ioctl(struct file *filep,
+				 unsigned int cmd, unsigned long arg)
+{
+	struct uacce_queue *q = filep->private_data;
+	struct uacce_device *uacce = q->uacce;
+
+	switch (cmd) {
+	case UACCE_CMD_SHARE_SVAS:
+		return uacce_cmd_share_qfr(q, arg);
+
+	case UACCE_CMD_START:
+		return uacce_start_queue(q);
+
+	case UACCE_CMD_PUT_Q:
+		return uacce_put_queue(q);
+
+	default:
+		if (!uacce->ops->ioctl)
+			return -ENOTTY;
+
+		return uacce->ops->ioctl(q, cmd, arg);
+	}
+}
+
+#ifdef CONFIG_COMPAT
+static long uacce_fops_compat_ioctl(struct file *filep,
+				   unsigned int cmd, unsigned long arg)
+{
+	arg = (unsigned long)compat_ptr(arg);
+
+	return uacce_fops_unl_ioctl(filep, cmd, arg);
+}
+#endif
+
+static int uacce_dev_open_check(struct uacce_device *uacce)
+{
+	if (uacce->flags & UACCE_DEV_SVA)
+		return 0;
+
+	/* the device can only be opened once if it does not support PASID */
+	if (kref_read(&uacce->cdev->kobj.kref) > 2)
+		return -EBUSY;
+
+	return 0;
+}
+
+static int uacce_fops_open(struct inode *inode, struct file *filep)
+{
+	struct uacce_queue *q;
+	struct iommu_sva *handle = NULL;
+	struct uacce_device *uacce;
+	int ret;
+	int pasid = 0;
+
+	uacce = idr_find(&uacce_idr, iminor(inode));
+	if (!uacce)
+		return -ENODEV;
+
+	if (!try_module_get(uacce->pdev->driver->owner))
+		return -ENODEV;
+
+	ret = uacce_dev_open_check(uacce);
+	if (ret)
+		goto out_with_module;
+
+	if (uacce->flags & UACCE_DEV_SVA) {
+		handle = iommu_sva_bind_device(uacce->pdev, current->mm, NULL);
+		if (IS_ERR(handle)) {
+			ret = PTR_ERR(handle);
+			goto out_with_module;
+		}
+		pasid = iommu_sva_get_pasid(handle);
+	}
+
+	q = kzalloc(sizeof(struct uacce_queue), GFP_KERNEL);
+	if (!q) {
+		ret = -ENOMEM;
+		goto out_with_module;
+	}
+
+	if (uacce->ops->get_queue) {
+		ret = uacce->ops->get_queue(uacce, pasid, q);
+		if (ret < 0)
+			goto out_with_mem;
+	}
+
+	q->pasid = pasid;
+	q->handle = handle;
+	q->uacce = uacce;
+	q->mm = current->mm;
+	memset(q->qfrs, 0, sizeof(q->qfrs));
+	INIT_LIST_HEAD(&q->list);
+	init_waitqueue_head(&q->wait);
+	filep->private_data = q;
+	q->state = UACCE_Q_INIT;
+
+	return 0;
+
+out_with_mem:
+	kfree(q);
+out_with_module:
+	module_put(uacce->pdev->driver->owner);
+	return ret;
+}
+
+static int uacce_fops_release(struct inode *inode, struct file *filep)
+{
+	struct uacce_queue *q = filep->private_data;
+	struct uacce_qfile_region *qfr;
+	struct uacce_device *uacce = q->uacce;
+	bool is_to_free_region;
+	int free_pages = 0;
+	int i;
+
+	mutex_lock(&uacce_mutex);
+
+	if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue)
+		uacce->ops->stop_queue(q);
+
+	for (i = 0; i < UACCE_QFRT_MAX; i++) {
+		qfr = q->qfrs[i];
+		if (!qfr)
+			continue;
+
+		is_to_free_region = false;
+		uacce_queue_unmap_qfr(q, qfr);
+		if (i == UACCE_QFRT_SS) {
+			list_del(&q->list);
+			if (list_empty(&qfr->qs))
+				is_to_free_region = true;
+		} else {
+			is_to_free_region = true;
+		}
+
+		if (is_to_free_region) {
+			free_pages += qfr->nr_pages;
+			uacce_destroy_region(q, qfr);
+		}
+
+		q->qfrs[i] = NULL;
+	}
+
+	if (current->mm == q->mm) {
+		down_write(&q->mm->mmap_sem);
+		q->mm->data_vm -= free_pages;
+		up_write(&q->mm->mmap_sem);
+	}
+
+	if (uacce->flags & UACCE_DEV_SVA)
+		iommu_sva_unbind_device(q->handle);
+
+	if ((q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED) &&
+	     uacce->ops->put_queue)
+		uacce->ops->put_queue(q);
+
+	kfree(q);
+	mutex_unlock(&uacce_mutex);
+
+	module_put(uacce->pdev->driver->owner);
+
+	return 0;
+}
+
+static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+{
+	struct uacce_queue *q = filep->private_data;
+	struct uacce_device *uacce = q->uacce;
+	struct uacce_qfile_region *qfr;
+	enum uacce_qfrt type = 0;
+	unsigned int flags = 0;
+	int ret;
+
+	if (vma->vm_pgoff < UACCE_QFRT_MAX)
+		type = vma->vm_pgoff;
+	else
+		return -EINVAL;
+
+	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
+
+	mutex_lock(&uacce_mutex);
+
+	/* FIXME: if the region needs no pages, this check can be skipped */
+	if (q->mm->data_vm + vma_pages(vma) >
+	    rlimit(RLIMIT_DATA) >> PAGE_SHIFT) {
+		ret = -ENOMEM;
+		goto out_with_lock;
+	}
+
+	if (q->qfrs[type]) {
+		ret = -EBUSY;
+		goto out_with_lock;
+	}
+
+	switch (type) {
+	case UACCE_QFRT_MMIO:
+		flags = UACCE_QFRF_SELFMT;
+		break;
+
+	case UACCE_QFRT_SS:
+		if (q->state != UACCE_Q_STARTED) {
+			ret = -EINVAL;
+			goto out_with_lock;
+		}
+
+		if (uacce->flags & UACCE_DEV_SVA) {
+			ret = -EINVAL;
+			goto out_with_lock;
+		}
+
+		flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
+
+		break;
+
+	case UACCE_QFRT_DKO:
+		if (uacce->flags & UACCE_DEV_SVA) {
+			ret = -EINVAL;
+			goto out_with_lock;
+		}
+
+		flags = UACCE_QFRF_MAP | UACCE_QFRF_KMAP;
+
+		break;
+
+	case UACCE_QFRT_DUS:
+		if (uacce->flags & UACCE_DEV_SVA) {
+			flags = UACCE_QFRF_SELFMT;
+			break;
+		}
+
+		flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
+		break;
+
+	default:
+		WARN_ON(1);
+		ret = -EINVAL;
+		goto out_with_lock;
+	}
+
+	qfr = uacce_create_region(q, vma, type, flags);
+	if (IS_ERR(qfr)) {
+		ret = PTR_ERR(qfr);
+		goto out_with_lock;
+	}
+	q->qfrs[type] = qfr;
+
+	if (type == UACCE_QFRT_SS) {
+		INIT_LIST_HEAD(&qfr->qs);
+		list_add(&q->list, &q->qfrs[type]->qs);
+	}
+
+	mutex_unlock(&uacce_mutex);
+
+	if (qfr->pages)
+		q->mm->data_vm += qfr->nr_pages;
+
+	return 0;
+
+out_with_lock:
+	mutex_unlock(&uacce_mutex);
+	return ret;
+}
+
+static __poll_t uacce_fops_poll(struct file *file, poll_table *wait)
+{
+	struct uacce_queue *q = file->private_data;
+	struct uacce_device *uacce = q->uacce;
+
+	poll_wait(file, &q->wait, wait);
+	if (uacce->ops->is_q_updated && uacce->ops->is_q_updated(q))
+		return EPOLLIN | EPOLLRDNORM;
+
+	return 0;
+}
+
+static const struct file_operations uacce_fops = {
+	.owner		= THIS_MODULE,
+	.open		= uacce_fops_open,
+	.release	= uacce_fops_release,
+	.unlocked_ioctl	= uacce_fops_unl_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl	= uacce_fops_compat_ioctl,
+#endif
+	.mmap		= uacce_fops_mmap,
+	.poll		= uacce_fops_poll,
+};
+
+#define to_uacce_device(dev) container_of(dev, struct uacce_device, dev)
+
+static ssize_t id_show(struct device *dev,
+		       struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+
+	return sprintf(buf, "%d\n", uacce->dev_id);
+}
+
+static ssize_t api_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+
+	return sprintf(buf, "%s\n", uacce->api_ver);
+}
+
+static ssize_t numa_distance_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+	int distance;
+
+	distance = node_distance(numa_node_id(), uacce->pdev->numa_node);
+
+	return sprintf(buf, "%d\n", abs(distance));
+}
+
+static ssize_t node_id_show(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+	int node_id;
+
+	node_id = dev_to_node(uacce->pdev);
+
+	return sprintf(buf, "%d\n", node_id);
+}
+
+static ssize_t flags_show(struct device *dev,
+			  struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+
+	return sprintf(buf, "%u\n", uacce->flags);
+}
+
+static ssize_t available_instances_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+	int val = 0;
+
+	if (uacce->ops->get_available_instances)
+		val = uacce->ops->get_available_instances(uacce);
+
+	return sprintf(buf, "%d\n", val);
+}
+
+static ssize_t algorithms_show(struct device *dev,
+			       struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+
+	return sprintf(buf, "%s", uacce->algs);
+}
+
+static ssize_t qfrt_mmio_size_show(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+
+	return sprintf(buf, "%lu\n",
+		       uacce->qf_pg_size[UACCE_QFRT_MMIO] << PAGE_SHIFT);
+}
+
+static ssize_t qfrt_dko_size_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+
+	return sprintf(buf, "%lu\n",
+		       uacce->qf_pg_size[UACCE_QFRT_DKO] << PAGE_SHIFT);
+}
+
+static ssize_t qfrt_dus_size_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+
+	return sprintf(buf, "%lu\n",
+		       uacce->qf_pg_size[UACCE_QFRT_DUS] << PAGE_SHIFT);
+}
+
+static ssize_t qfrt_ss_size_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+
+	return sprintf(buf, "%lu\n",
+		       uacce->qf_pg_size[UACCE_QFRT_SS] << PAGE_SHIFT);
+}
+
+static DEVICE_ATTR_RO(id);
+static DEVICE_ATTR_RO(api);
+static DEVICE_ATTR_RO(numa_distance);
+static DEVICE_ATTR_RO(node_id);
+static DEVICE_ATTR_RO(flags);
+static DEVICE_ATTR_RO(available_instances);
+static DEVICE_ATTR_RO(algorithms);
+static DEVICE_ATTR_RO(qfrt_mmio_size);
+static DEVICE_ATTR_RO(qfrt_dko_size);
+static DEVICE_ATTR_RO(qfrt_dus_size);
+static DEVICE_ATTR_RO(qfrt_ss_size);
+
+static struct attribute *uacce_dev_attrs[] = {
+	&dev_attr_id.attr,
+	&dev_attr_api.attr,
+	&dev_attr_node_id.attr,
+	&dev_attr_numa_distance.attr,
+	&dev_attr_flags.attr,
+	&dev_attr_available_instances.attr,
+	&dev_attr_algorithms.attr,
+	&dev_attr_qfrt_mmio_size.attr,
+	&dev_attr_qfrt_dko_size.attr,
+	&dev_attr_qfrt_dus_size.attr,
+	&dev_attr_qfrt_ss_size.attr,
+	NULL,
+};
+ATTRIBUTE_GROUPS(uacce_dev);
+
+static void uacce_release(struct device *dev)
+{
+	struct uacce_device *uacce = to_uacce_device(dev);
+
+	kfree(uacce);
+}
+
+/* Borrowed from VFIO to handle MSI translation */
+static bool uacce_iommu_has_sw_msi(struct iommu_group *group,
+				   phys_addr_t *base)
+{
+	struct list_head group_resv_regions;
+	struct iommu_resv_region *region, *next;
+	bool ret = false;
+
+	INIT_LIST_HEAD(&group_resv_regions);
+	iommu_get_group_resv_regions(group, &group_resv_regions);
+	list_for_each_entry(region, &group_resv_regions, list) {
+		/*
+		 * The presence of any 'real' MSI regions should take
+		 * precedence over the software-managed one if the
+		 * IOMMU driver happens to advertise both types.
+		 */
+		if (region->type == IOMMU_RESV_MSI) {
+			ret = false;
+			break;
+		}
+
+		if (region->type == IOMMU_RESV_SW_MSI) {
+			*base = region->start;
+			ret = true;
+		}
+	}
+
+	list_for_each_entry_safe(region, next, &group_resv_regions, list)
+		kfree(region);
+
+	return ret;
+}
+
+static int uacce_set_iommu_domain(struct uacce_device *uacce)
+{
+	struct iommu_domain *domain;
+	struct iommu_group *group;
+	struct device *dev = uacce->pdev;
+	bool resv_msi;
+	phys_addr_t resv_msi_base = 0;
+	int ret;
+
+	if (uacce->flags & UACCE_DEV_SVA)
+		return 0;
+
+	/* allocate and attach an unmanaged domain */
+	domain = iommu_domain_alloc(uacce->pdev->bus);
+	if (!domain) {
+		dev_err(&uacce->dev, "cannot get domain for iommu\n");
+		return -ENODEV;
+	}
+
+	ret = iommu_attach_device(domain, uacce->pdev);
+	if (ret)
+		goto err_with_domain;
+
+	if (iommu_capable(dev->bus, IOMMU_CAP_CACHE_COHERENCY))
+		uacce->prot |= IOMMU_CACHE;
+
+	group = iommu_group_get(dev);
+	if (!group) {
+		ret = -EINVAL;
+		goto err_with_domain;
+	}
+
+	resv_msi = uacce_iommu_has_sw_msi(group, &resv_msi_base);
+	iommu_group_put(group);
+
+	if (resv_msi) {
+		if (!irq_domain_check_msi_remap() &&
+		    !iommu_capable(dev->bus, IOMMU_CAP_INTR_REMAP)) {
+			dev_warn(dev, "No interrupt remapping support!\n");
+			ret = -EPERM;
+			goto err_with_domain;
+		}
+
+		ret = iommu_get_msi_cookie(domain, resv_msi_base);
+		if (ret)
+			goto err_with_domain;
+	}
+
+	return 0;
+
+err_with_domain:
+	iommu_domain_free(domain);
+	return ret;
+}
+
+static void uacce_unset_iommu_domain(struct uacce_device *uacce)
+{
+	struct iommu_domain *domain;
+
+	if (uacce->flags & UACCE_DEV_SVA)
+		return;
+
+	domain = iommu_get_domain_for_dev(uacce->pdev);
+	if (!domain) {
+		dev_err(&uacce->dev, "bug: no domain attached to device\n");
+		return;
+	}
+
+	iommu_detach_device(domain, uacce->pdev);
+	iommu_domain_free(domain);
+}
+
+/**
+ * uacce_register - register an accelerator
+ * @parent: pointer to the parent device
+ * @interface: pointer to the uacce_interface describing the device
+ *
+ * Return: a new uacce device on success, an ERR_PTR on failure
+ */
+struct uacce_device *uacce_register(struct device *parent,
+				    struct uacce_interface *interface)
+{
+	int ret;
+	struct uacce_device *uacce;
+	unsigned int flags = interface->flags;
+
+	uacce = kzalloc(sizeof(struct uacce_device), GFP_KERNEL);
+	if (!uacce)
+		return ERR_PTR(-ENOMEM);
+
+	if (flags & UACCE_DEV_SVA) {
+		ret = iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA);
+		if (ret)
+			flags &= ~UACCE_DEV_SVA;
+	}
+
+	uacce->pdev = parent;
+	uacce->flags = flags;
+	uacce->ops = interface->ops;
+
+	ret = uacce_set_iommu_domain(uacce);
+	if (ret)
+		goto err_free;
+
+	mutex_lock(&uacce_mutex);
+
+	ret = idr_alloc(&uacce_idr, uacce, 0, 0, GFP_KERNEL);
+	if (ret < 0)
+		goto err_with_lock;
+
+	uacce->dev_id = ret;
+	uacce->cdev = cdev_alloc();
+	if (!uacce->cdev) {
+		ret = -ENOMEM;
+		goto err_with_idr;
+	}
+
+	uacce->cdev->ops = &uacce_fops;
+	uacce->cdev->owner = THIS_MODULE;
+	device_initialize(&uacce->dev);
+	uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
+	uacce->dev.class = uacce_class;
+	uacce->dev.groups = uacce_dev_groups;
+	uacce->dev.parent = uacce->pdev;
+	uacce->dev.release = uacce_release;
+	dev_set_name(&uacce->dev, "%s-%d", interface->name, uacce->dev_id);
+	ret = cdev_device_add(uacce->cdev, &uacce->dev);
+	if (ret)
+		goto err_with_idr;
+
+	mutex_unlock(&uacce_mutex);
+
+	return uacce;
+
+err_with_idr:
+	idr_remove(&uacce_idr, uacce->dev_id);
+err_with_lock:
+	mutex_unlock(&uacce_mutex);
+	uacce_unset_iommu_domain(uacce);
+err_free:
+	if (flags & UACCE_DEV_SVA)
+		iommu_dev_disable_feature(uacce->pdev, IOMMU_DEV_FEAT_SVA);
+	kfree(uacce);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(uacce_register);
+
+/**
+ * uacce_unregister - unregister an accelerator
+ * @uacce: the accelerator to unregister
+ */
+void uacce_unregister(struct uacce_device *uacce)
+{
+	if (!uacce)
+		return;
+
+	mutex_lock(&uacce_mutex);
+
+	if (uacce->flags & UACCE_DEV_SVA)
+		iommu_dev_disable_feature(uacce->pdev, IOMMU_DEV_FEAT_SVA);
+
+	uacce_unset_iommu_domain(uacce);
+	cdev_device_del(uacce->cdev, &uacce->dev);
+	idr_remove(&uacce_idr, uacce->dev_id);
+	put_device(&uacce->dev);
+
+	mutex_unlock(&uacce_mutex);
+}
+EXPORT_SYMBOL_GPL(uacce_unregister);
+
+static int __init uacce_init(void)
+{
+	int ret;
+
+	uacce_class = class_create(THIS_MODULE, UACCE_NAME);
+	if (IS_ERR(uacce_class)) {
+		ret = PTR_ERR(uacce_class);
+		goto err;
+	}
+
+	ret = alloc_chrdev_region(&uacce_devt, 0, MINORMASK, UACCE_NAME);
+	if (ret)
+		goto err_with_class;
+
+	return 0;
+
+err_with_class:
+	class_destroy(uacce_class);
+err:
+	return ret;
+}
+
+static __exit void uacce_exit(void)
+{
+	unregister_chrdev_region(uacce_devt, MINORMASK);
+	class_destroy(uacce_class);
+	idr_destroy(&uacce_idr);
+}
+
+subsys_initcall(uacce_init);
+module_exit(uacce_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Hisilicon Tech. Co., Ltd.");
+MODULE_DESCRIPTION("Accelerator interface for Userland applications");
diff --git a/include/linux/uacce.h b/include/linux/uacce.h
new file mode 100644
index 0000000..8ce0640
--- /dev/null
+++ b/include/linux/uacce.h
@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _LINUX_UACCE_H
+#define _LINUX_UACCE_H
+
+#include <linux/cdev.h>
+#include <uapi/misc/uacce/uacce.h>
+
+#define UACCE_NAME		"uacce"
+
+struct uacce_queue;
+struct uacce_device;
+
+/* uacce queue file region flags; each selects a different mapping operation */
+#define UACCE_QFRF_MAP		BIT(0)	/* map to current queue */
+#define UACCE_QFRF_MMAP		BIT(1)	/* map to user space */
+#define UACCE_QFRF_KMAP		BIT(2)	/* map to kernel space */
+#define UACCE_QFRF_DMA		BIT(3)	/* use dma api for the region */
+#define UACCE_QFRF_SELFMT	BIT(4)	/* self maintained qfr */
+
+/**
+ * struct uacce_qfile_region - structure of queue file region
+ * @type: type of the qfr
+ * @iova: IOVA shared between user space and the device
+ * @pages: pointer to the pages backing the qfr memory
+ * @nr_pages: number of pages in the qfr memory
+ * @prot: qfr protection flags
+ * @flags: qfr flags
+ * @qs: list of queues sharing this region (SS region only)
+ * @kaddr: kernel address of the qfr
+ * @dma: DMA address, if allocated via the DMA API
+ */
+struct uacce_qfile_region {
+	enum uacce_qfrt type;
+	unsigned long iova;
+	struct page **pages;
+	u32 nr_pages;
+	u32 prot;
+	u32 flags;
+	struct list_head qs;
+	void *kaddr;
+	dma_addr_t dma;
+};
+
+/**
+ * struct uacce_ops - uacce device operations
+ * @get_available_instances: return the number of instances still available
+ * @get_queue: get a queue from the device
+ * @put_queue: return a queue to the device
+ * @start_queue: start the queue after get_queue
+ * @stop_queue: stop the queue before put_queue
+ * @is_q_updated: check whether a task on the queue has finished
+ * @mask_notify: mask the task irq of the queue
+ * @mmap: map queue addresses into user space
+ * @reset: reset the uacce device
+ * @reset_queue: reset the queue
+ * @ioctl: ioctl handler for user-space users of the queue
+ */
+struct uacce_ops {
+	int (*get_available_instances)(struct uacce_device *uacce);
+	int (*get_queue)(struct uacce_device *uacce, unsigned long arg,
+			 struct uacce_queue *q);
+	void (*put_queue)(struct uacce_queue *q);
+	int (*start_queue)(struct uacce_queue *q);
+	void (*stop_queue)(struct uacce_queue *q);
+	int (*is_q_updated)(struct uacce_queue *q);
+	void (*mask_notify)(struct uacce_queue *q, int event_mask);
+	int (*mmap)(struct uacce_queue *q, struct vm_area_struct *vma,
+		    struct uacce_qfile_region *qfr);
+	int (*reset)(struct uacce_device *uacce);
+	int (*reset_queue)(struct uacce_queue *q);
+	long (*ioctl)(struct uacce_queue *q, unsigned int cmd,
+		      unsigned long arg);
+};
+
+/**
+ * struct uacce_interface
+ * @name: the uacce device name.  Will show up in sysfs
+ * @flags: uacce device attributes
+ * @ops: pointer to the struct uacce_ops
+ *
+ * This structure is used for the uacce_register()
+ */
+struct uacce_interface {
+	char name[32];
+	unsigned int flags;
+	struct uacce_ops *ops;
+};
+
+enum uacce_q_state {
+	UACCE_Q_INIT,
+	UACCE_Q_STARTED,
+	UACCE_Q_ZOMBIE,
+};
+
+/**
+ * struct uacce_queue
+ * @uacce: pointer to uacce
+ * @priv: private pointer
+ * @wait: wait queue head
+ * @pasid: pasid of the queue
+ * @handle: iommu_sva handle returned by iommu_sva_bind_device()
+ * @list: entry in the qfr->qs sharing list
+ * @mm: current->mm at open time
+ * @qfrs: qfr regions of the queue
+ * @state: queue state machine
+ */
+struct uacce_queue {
+	struct uacce_device *uacce;
+	void *priv;
+	wait_queue_head_t wait;
+	int pasid;
+	struct iommu_sva *handle;
+	struct list_head list;
+	struct mm_struct *mm;
+	struct uacce_qfile_region *qfrs[UACCE_QFRT_MAX];
+	enum uacce_q_state state;
+};
+
+/**
+ * struct uacce_device
+ * @algs: supported algorithms
+ * @api_ver: api version
+ * @qf_pg_size: size of each queue file region, in pages
+ * @ops: pointer to the struct uacce_ops
+ * @pdev: pointer to the parent device
+ * @is_vf: whether the device is a virtual function
+ * @flags: uacce attributes
+ * @dev_id: id of the uacce device
+ * @prot: uacce protection flag
+ * @cdev: cdev of the uacce
+ * @dev: dev of the uacce
+ * @priv: private pointer of the uacce
+ */
+struct uacce_device {
+	const char *algs;
+	const char *api_ver;
+	unsigned long qf_pg_size[UACCE_QFRT_MAX];
+	struct uacce_ops *ops;
+	struct device *pdev;
+	bool is_vf;
+	u32 flags;
+	u32 dev_id;
+	u32 prot;
+	struct cdev *cdev;
+	struct device dev;
+	void *priv;
+};
+
+#if IS_ENABLED(CONFIG_UACCE)
+
+struct uacce_device *uacce_register(struct device *parent,
+				    struct uacce_interface *interface);
+void uacce_unregister(struct uacce_device *uacce);
+
+#else /* CONFIG_UACCE */
+
+static inline
+struct uacce_device *uacce_register(struct device *parent,
+				    struct uacce_interface *interface)
+{
+	return ERR_PTR(-ENODEV);
+}
+
+static inline void uacce_unregister(struct uacce_device *uacce) {}
+
+#endif /* CONFIG_UACCE */
+
+#endif /* _LINUX_UACCE_H */
diff --git a/include/uapi/misc/uacce/uacce.h b/include/uapi/misc/uacce/uacce.h
new file mode 100644
index 0000000..c859668
--- /dev/null
+++ b/include/uapi/misc/uacce/uacce.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _UAPI_UACCE_H
+#define _UAPI_UACCE_H
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+#define UACCE_CMD_SHARE_SVAS	_IO('W', 0)
+#define UACCE_CMD_START		_IO('W', 1)
+#define UACCE_CMD_PUT_Q		_IO('W', 2)
+
+/**
+ * enum uacce_dev_flag: Device flags
+ * @UACCE_DEV_SHARE_DOMAIN: no PASID; the whole iommu domain is shared
+ *			    with a single process
+ * @UACCE_DEV_SVA: Shared Virtual Addressing:
+ *		   supports PASID, and
+ *		   device page fault (PCIe device) or
+ *		   SMMU stall (platform device)
+ */
+enum uacce_dev_flag {
+	UACCE_DEV_SHARE_DOMAIN = 0x0,
+	UACCE_DEV_SVA = 0x1,
+};
+
+/**
+ * enum uacce_qfrt: qfrt type
+ * @UACCE_QFRT_MMIO: device mmio region
+ * @UACCE_QFRT_DKO: device kernel-only region
+ * @UACCE_QFRT_DUS: device user share region
+ * @UACCE_QFRT_SS: static shared memory region
+ * @UACCE_QFRT_MAX: indicate the boundary
+ */
+enum uacce_qfrt {
+	UACCE_QFRT_MMIO = 0,
+	UACCE_QFRT_DKO = 1,
+	UACCE_QFRT_DUS = 2,
+	UACCE_QFRT_SS = 3,
+	UACCE_QFRT_MAX = 16,
+};
+
+#endif
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v6 3/3] crypto: hisilicon - register zip engine to uacce
  2019-10-16  8:34 [PATCH v6 0/3] Add uacce module for Accelerator Zhangfei Gao
  2019-10-16  8:34 ` [PATCH v6 1/3] uacce: Add documents for uacce Zhangfei Gao
  2019-10-16  8:34 ` [PATCH v6 2/3] uacce: add uacce driver Zhangfei Gao
@ 2019-10-16  8:34 ` Zhangfei Gao
  2 siblings, 0 replies; 15+ messages in thread
From: Zhangfei Gao @ 2019-10-16  8:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu, jonathan.cameron,
	grant.likely, jean-philippe, ilias.apalodimas, francois.ozog,
	kenneth-lee-2012, Wangzhou, haojian . zhuang
  Cc: linux-accelerators, linux-kernel, linux-crypto, Zhangfei Gao

Register the HiSilicon QM to the uacce framework so that user-space
crypto drivers can use the hardware queues.

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
---
 drivers/crypto/hisilicon/qm.c           | 254 ++++++++++++++++++++++++++++++--
 drivers/crypto/hisilicon/qm.h           |  13 +-
 drivers/crypto/hisilicon/zip/zip_main.c |  39 ++---
 include/uapi/misc/uacce/qm.h            |  22 +++
 4 files changed, 292 insertions(+), 36 deletions(-)
 create mode 100644 include/uapi/misc/uacce/qm.h

diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index a8ed6990..0ffb0ad 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -9,6 +9,9 @@
 #include <linux/log2.h>
 #include <linux/seq_file.h>
 #include <linux/slab.h>
+#include <linux/uacce.h>
+#include <linux/uaccess.h>
+#include <uapi/misc/uacce/qm.h>
 #include "qm.h"
 
 /* eq/aeq irq enable */
@@ -465,17 +468,22 @@ static void qm_cq_head_update(struct hisi_qp *qp)
 
 static void qm_poll_qp(struct hisi_qp *qp, struct hisi_qm *qm)
 {
-	struct qm_cqe *cqe = qp->cqe + qp->qp_status.cq_head;
-
-	if (qp->req_cb) {
-		while (QM_CQE_PHASE(cqe) == qp->qp_status.cqc_phase) {
-			dma_rmb();
-			qp->req_cb(qp, qp->sqe + qm->sqe_size * cqe->sq_head);
-			qm_cq_head_update(qp);
-			cqe = qp->cqe + qp->qp_status.cq_head;
-			qm_db(qm, qp->qp_id, QM_DOORBELL_CMD_CQ,
-			      qp->qp_status.cq_head, 0);
-			atomic_dec(&qp->qp_status.used);
+	struct qm_cqe *cqe;
+
+	if (qp->event_cb) {
+		qp->event_cb(qp);
+	} else {
+		cqe = qp->cqe + qp->qp_status.cq_head;
+
+		if (qp->req_cb) {
+			while (QM_CQE_PHASE(cqe) == qp->qp_status.cqc_phase) {
+				dma_rmb();
+				qp->req_cb(qp, qp->sqe + qm->sqe_size *
+					   cqe->sq_head);
+				qm_cq_head_update(qp);
+				cqe = qp->cqe + qp->qp_status.cq_head;
+				atomic_dec(&qp->qp_status.used);
+			}
 		}
 
 		/* set c_flag */
@@ -1397,6 +1405,221 @@ static void hisi_qm_cache_wb(struct hisi_qm *qm)
 	}
 }
 
+static void qm_qp_event_notifier(struct hisi_qp *qp)
+{
+	wake_up_interruptible(&qp->uacce_q->wait);
+}
+
+static int hisi_qm_get_available_instances(struct uacce_device *uacce)
+{
+	int i, ret;
+	struct hisi_qm *qm = uacce->priv;
+
+	read_lock(&qm->qps_lock);
+	for (i = 0, ret = 0; i < qm->qp_num; i++)
+		if (!qm->qp_array[i])
+			ret++;
+	read_unlock(&qm->qps_lock);
+
+	return ret;
+}
+
+static int hisi_qm_uacce_get_queue(struct uacce_device *uacce,
+				   unsigned long arg,
+				   struct uacce_queue *q)
+{
+	struct hisi_qm *qm = uacce->priv;
+	struct hisi_qp *qp;
+	u8 alg_type = 0;
+
+	qp = hisi_qm_create_qp(qm, alg_type);
+	if (IS_ERR(qp))
+		return PTR_ERR(qp);
+
+	q->priv = qp;
+	q->uacce = uacce;
+	qp->uacce_q = q;
+	qp->event_cb = qm_qp_event_notifier;
+	qp->pasid = arg;
+
+	return 0;
+}
+
+static void hisi_qm_uacce_put_queue(struct uacce_queue *q)
+{
+	struct hisi_qp *qp = q->priv;
+
+	/*
+	 * put_queue is only called with uacce_mode=1, where only one queue
+	 * can be in use, so flush all SQC cache back here.
+	 */
+	hisi_qm_cache_wb(qp->qm);
+
+	/* the hardware should be stopped here, but v1 hardware cannot do that */
+	hisi_qm_release_qp(qp);
+}
+
+/* map sq/cq/doorbell to user space */
+static int hisi_qm_uacce_mmap(struct uacce_queue *q,
+			      struct vm_area_struct *vma,
+			      struct uacce_qfile_region *qfr)
+{
+	struct hisi_qp *qp = q->priv;
+	struct hisi_qm *qm = qp->qm;
+	size_t sz = vma->vm_end - vma->vm_start;
+	struct pci_dev *pdev = qm->pdev;
+	struct device *dev = &pdev->dev;
+	unsigned long vm_pgoff;
+	int ret;
+
+	switch (qfr->type) {
+	case UACCE_QFRT_MMIO:
+		if (qm->ver == QM_HW_V2) {
+			if (sz > PAGE_SIZE * (QM_DOORBELL_PAGE_NR +
+			    QM_DOORBELL_SQ_CQ_BASE_V2 / PAGE_SIZE))
+				return -EINVAL;
+		} else {
+			if (sz > PAGE_SIZE * QM_DOORBELL_PAGE_NR)
+				return -EINVAL;
+		}
+
+		vma->vm_flags |= VM_IO;
+
+		return remap_pfn_range(vma, vma->vm_start,
+				       qm->phys_base >> PAGE_SHIFT,
+				       sz, pgprot_noncached(vma->vm_page_prot));
+	case UACCE_QFRT_DUS:
+		if (sz != qp->qdma.size)
+			return -EINVAL;
+
+		/*
+		 * dma_mmap_coherent() requires vm_pgoff to be 0;
+		 * restore vm_pgoff to its initial value afterwards.
+		 */
+		vm_pgoff = vma->vm_pgoff;
+		vma->vm_pgoff = 0;
+		ret = dma_mmap_coherent(dev, vma, qp->qdma.va,
+					qp->qdma.dma, sz);
+		vma->vm_pgoff = vm_pgoff;
+		return ret;
+
+	default:
+		return -EINVAL;
+	}
+}
+
+static int hisi_qm_uacce_start_queue(struct uacce_queue *q)
+{
+	struct hisi_qp *qp = q->priv;
+
+	return hisi_qm_start_qp(qp, qp->pasid);
+}
+
+static void hisi_qm_uacce_stop_queue(struct uacce_queue *q)
+{
+	struct hisi_qp *qp = q->priv;
+
+	hisi_qm_stop_qp(qp);
+}
+
+static int qm_set_sqctype(struct uacce_queue *q, u16 type)
+{
+	struct hisi_qm *qm = q->uacce->priv;
+	struct hisi_qp *qp = q->priv;
+
+	write_lock(&qm->qps_lock);
+	qp->alg_type = type;
+	write_unlock(&qm->qps_lock);
+
+	return 0;
+}
+
+static long hisi_qm_uacce_ioctl(struct uacce_queue *q, unsigned int cmd,
+				unsigned long arg)
+{
+	struct hisi_qp *qp = q->priv;
+	struct hisi_qp_ctx qp_ctx;
+
+	if (cmd == UACCE_CMD_QM_SET_QP_CTX) {
+		if (copy_from_user(&qp_ctx, (void __user *)arg,
+				   sizeof(struct hisi_qp_ctx)))
+			return -EFAULT;
+
+		if (qp_ctx.qc_type != 0 && qp_ctx.qc_type != 1)
+			return -EINVAL;
+
+		qm_set_sqctype(q, qp_ctx.qc_type);
+		qp_ctx.id = qp->qp_id;
+
+		if (copy_to_user((void __user *)arg, &qp_ctx,
+				 sizeof(struct hisi_qp_ctx)))
+			return -EFAULT;
+	} else {
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static struct uacce_ops uacce_qm_ops = {
+	.get_available_instances = hisi_qm_get_available_instances,
+	.get_queue = hisi_qm_uacce_get_queue,
+	.put_queue = hisi_qm_uacce_put_queue,
+	.start_queue = hisi_qm_uacce_start_queue,
+	.stop_queue = hisi_qm_uacce_stop_queue,
+	.mmap = hisi_qm_uacce_mmap,
+	.ioctl = hisi_qm_uacce_ioctl,
+};
+
+static int qm_register_uacce(struct hisi_qm *qm)
+{
+	struct pci_dev *pdev = qm->pdev;
+	struct uacce_device *uacce;
+	unsigned long mmio_page_nr;
+	unsigned long dus_page_nr;
+	struct uacce_interface interface = {
+		.flags = UACCE_DEV_SVA,
+		.ops = &uacce_qm_ops,
+	};
+
+	strncpy(interface.name, pdev->driver->name, sizeof(interface.name));
+
+	uacce = uacce_register(&pdev->dev, &interface);
+	if (IS_ERR(uacce))
+		return PTR_ERR(uacce);
+
+	if (uacce->flags & UACCE_DEV_SVA) {
+		qm->use_sva = true;
+	} else {
+		/* only consider sva case */
+		uacce_unregister(uacce);
+		return -EINVAL;
+	}
+
+	uacce->is_vf = pdev->is_virtfn;
+	uacce->priv = qm;
+	uacce->algs = qm->algs;
+
+	if (qm->ver == QM_HW_V1) {
+		mmio_page_nr = QM_DOORBELL_PAGE_NR;
+		uacce->api_ver = HISI_QM_API_VER_BASE;
+	} else {
+		mmio_page_nr = QM_DOORBELL_PAGE_NR +
+			QM_DOORBELL_SQ_CQ_BASE_V2 / PAGE_SIZE;
+		uacce->api_ver = HISI_QM_API_VER2_BASE;
+	}
+
+	dus_page_nr = (PAGE_SIZE - 1 + qm->sqe_size * QM_Q_DEPTH +
+		       sizeof(struct qm_cqe) * QM_Q_DEPTH) >> PAGE_SHIFT;
+
+	uacce->qf_pg_size[UACCE_QFRT_MMIO] = mmio_page_nr;
+	uacce->qf_pg_size[UACCE_QFRT_DUS]  = dus_page_nr;
+	uacce->qf_pg_size[UACCE_QFRT_SS]   = 0;
+
+	qm->uacce = uacce;
+
+	return 0;
+}
+
 /**
  * hisi_qm_init() - Initialize configures about qm.
  * @qm: The qm needing init.
@@ -1421,6 +1644,10 @@ int hisi_qm_init(struct hisi_qm *qm)
 		return -EINVAL;
 	}
 
+	ret = qm_register_uacce(qm);
+	if (ret < 0)
+		dev_warn(&pdev->dev, "failed to register uacce (%d)\n", ret);
+
 	ret = pci_enable_device_mem(pdev);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "Failed to enable device mem!\n");
@@ -1433,6 +1660,8 @@ int hisi_qm_init(struct hisi_qm *qm)
 		goto err_disable_pcidev;
 	}
 
+	qm->phys_base = pci_resource_start(pdev, PCI_BAR_2);
+	qm->size = pci_resource_len(qm->pdev, PCI_BAR_2);
 	qm->io_base = ioremap(pci_resource_start(pdev, PCI_BAR_2),
 			      pci_resource_len(qm->pdev, PCI_BAR_2));
 	if (!qm->io_base) {
@@ -1504,6 +1733,9 @@ void hisi_qm_uninit(struct hisi_qm *qm)
 	iounmap(qm->io_base);
 	pci_release_mem_regions(pdev);
 	pci_disable_device(pdev);
+
+	if (qm->uacce)
+		uacce_unregister(qm->uacce);
 }
 EXPORT_SYMBOL_GPL(hisi_qm_uninit);
 
diff --git a/drivers/crypto/hisilicon/qm.h b/drivers/crypto/hisilicon/qm.h
index 103e2fd..84a3be9 100644
--- a/drivers/crypto/hisilicon/qm.h
+++ b/drivers/crypto/hisilicon/qm.h
@@ -77,6 +77,10 @@
 
 #define HISI_ACC_SGL_SGE_NR_MAX		255
 
+/* page number for queue file region */
+#define QM_DOORBELL_PAGE_NR		1
+
+
 enum qp_state {
 	QP_STOP,
 };
@@ -161,7 +165,12 @@ struct hisi_qm {
 	u32 error_mask;
 	u32 msi_mask;
 
+	const char *algs;
 	bool use_dma_api;
+	bool use_sva;
+	resource_size_t phys_base;
+	resource_size_t size;
+	struct uacce_device *uacce;
 };
 
 struct hisi_qp_status {
@@ -191,10 +200,12 @@ struct hisi_qp {
 	struct hisi_qp_ops *hw_ops;
 	void *qp_ctx;
 	void (*req_cb)(struct hisi_qp *qp, void *data);
+	void (*event_cb)(struct hisi_qp *qp);
 	struct work_struct work;
 	struct workqueue_struct *wq;
-
 	struct hisi_qm *qm;
+	u16 pasid;
+	struct uacce_queue *uacce_q;
 };
 
 int hisi_qm_init(struct hisi_qm *qm);
diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
index 1b2ee96..48860d2 100644
--- a/drivers/crypto/hisilicon/zip/zip_main.c
+++ b/drivers/crypto/hisilicon/zip/zip_main.c
@@ -316,8 +316,14 @@ static void hisi_zip_set_user_domain_and_cache(struct hisi_zip *hisi_zip)
 	writel(AXUSER_BASE, base + HZIP_BD_RUSER_32_63);
 	writel(AXUSER_BASE, base + HZIP_SGL_RUSER_32_63);
 	writel(AXUSER_BASE, base + HZIP_BD_WUSER_32_63);
-	writel(AXUSER_BASE, base + HZIP_DATA_RUSER_32_63);
-	writel(AXUSER_BASE, base + HZIP_DATA_WUSER_32_63);
+
+	if (hisi_zip->qm.use_sva) {
+		writel(AXUSER_BASE | AXUSER_SSV, base + HZIP_DATA_RUSER_32_63);
+		writel(AXUSER_BASE | AXUSER_SSV, base + HZIP_DATA_WUSER_32_63);
+	} else {
+		writel(AXUSER_BASE, base + HZIP_DATA_RUSER_32_63);
+		writel(AXUSER_BASE, base + HZIP_DATA_WUSER_32_63);
+	}
 
 	/* let's open all compression/decompression cores */
 	writel(DECOMP_CHECK_ENABLE | ALL_COMP_DECOMP_EN,
@@ -671,24 +677,12 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	qm = &hisi_zip->qm;
 	qm->pdev = pdev;
 	qm->ver = rev_id;
-
+	qm->use_dma_api = true;
+	qm->algs = "zlib\ngzip\n";
 	qm->sqe_size = HZIP_SQE_SIZE;
 	qm->dev_name = hisi_zip_name;
 	qm->fun_type = (pdev->device == PCI_DEVICE_ID_ZIP_PF) ? QM_HW_PF :
 								QM_HW_VF;
-	switch (uacce_mode) {
-	case 0:
-		qm->use_dma_api = true;
-		break;
-	case 1:
-		qm->use_dma_api = false;
-		break;
-	case 2:
-		qm->use_dma_api = true;
-		break;
-	default:
-		return -EINVAL;
-	}
 
 	ret = hisi_qm_init(qm);
 	if (ret) {
@@ -976,12 +970,10 @@ static int __init hisi_zip_init(void)
 		goto err_pci;
 	}
 
-	if (uacce_mode == 0 || uacce_mode == 2) {
-		ret = hisi_zip_register_to_crypto();
-		if (ret < 0) {
-			pr_err("Failed to register driver to crypto.\n");
-			goto err_crypto;
-		}
+	ret = hisi_zip_register_to_crypto();
+	if (ret < 0) {
+		pr_err("Failed to register driver to crypto.\n");
+		goto err_crypto;
 	}
 
 	return 0;
@@ -996,8 +988,7 @@ static int __init hisi_zip_init(void)
 
 static void __exit hisi_zip_exit(void)
 {
-	if (uacce_mode == 0 || uacce_mode == 2)
-		hisi_zip_unregister_from_crypto();
+	hisi_zip_unregister_from_crypto();
 	pci_unregister_driver(&hisi_zip_pci_driver);
 	hisi_zip_unregister_debugfs();
 }
diff --git a/include/uapi/misc/uacce/qm.h b/include/uapi/misc/uacce/qm.h
new file mode 100644
index 0000000..08f1c79
--- /dev/null
+++ b/include/uapi/misc/uacce/qm.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#ifndef HISI_QM_USR_IF_H
+#define HISI_QM_USR_IF_H
+
+#include <linux/types.h>
+
+/**
+ * struct hisi_qp_ctx - User data for hisi qp.
+ * @id: The qp id returned to user space
+ * @qc_type: Accelerator algorithm type
+ */
+struct hisi_qp_ctx {
+	__u16 id;
+	__u16 qc_type;
+};
+
+#define HISI_QM_API_VER_BASE "hisi_qm_v1"
+#define HISI_QM_API_VER2_BASE "hisi_qm_v2"
+
+#define UACCE_CMD_QM_SET_QP_CTX	_IOWR('H', 10, struct hisi_qp_ctx)
+
+#endif
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 2/3] uacce: add uacce driver
  2019-10-16  8:34 ` [PATCH v6 2/3] uacce: add uacce driver Zhangfei Gao
@ 2019-10-16 17:28   ` Jean-Philippe Brucker
       [not found]     ` <5da9a9cd.1c69fb81.9f8e8.60faSMTPIN_ADDED_BROKEN@mx.google.com>
  2019-10-23 16:58     ` Jerome Glisse
  2019-10-22 18:49   ` Jerome Glisse
  1 sibling, 2 replies; 15+ messages in thread
From: Jean-Philippe Brucker @ 2019-10-16 17:28 UTC (permalink / raw)
  To: Zhangfei Gao
  Cc: Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu, jonathan.cameron,
	grant.likely, ilias.apalodimas, francois.ozog, kenneth-lee-2012,
	Wangzhou, haojian . zhuang, linux-accelerators, linux-kernel,
	linux-crypto, Kenneth Lee, Zaibo Xu

Hi,

I have a few comments on the overall design and some implementation
details below.

Could you also Cc iommu@lists.linux-foundation.org on your next posting?
I'm sure some subscribers would be interested and I don't think many
people know about linux-accelerators yet.

On Wed, Oct 16, 2019 at 04:34:32PM +0800, Zhangfei Gao wrote:
> diff --git a/Documentation/ABI/testing/sysfs-driver-uacce b/Documentation/ABI/testing/sysfs-driver-uacce
> new file mode 100644
> index 0000000..e48333c
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-driver-uacce
> @@ -0,0 +1,65 @@
> +What:           /sys/class/uacce/hisi_zip-<n>/id

Should probably be /sys/class/uacce/<dev_name>/ if we want the API to be
used by other drivers.

[...]
> +static int uacce_queue_map_qfr(struct uacce_queue *q,
> +			       struct uacce_qfile_region *qfr)
> +{
> +	struct device *dev = q->uacce->pdev;
> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> +	int i, j, ret;
> +
> +	if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA))
> +		return 0;
> +
> +	if (!domain)
> +		return -ENODEV;
> +
> +	for (i = 0; i < qfr->nr_pages; i++) {
> +		ret = iommu_map(domain, qfr->iova + i * PAGE_SIZE,
> +				page_to_phys(qfr->pages[i]),
> +				PAGE_SIZE, qfr->prot | q->uacce->prot);
> +		if (ret)
> +			goto err_with_map_pages;
> +
> +		get_page(qfr->pages[i]);

I guess we need this reference when coming from UACCE_CMD_SHARE_SVAS?
Otherwise we should already get one from alloc_page().

[...]
> +static int uacce_qfr_alloc_pages(struct uacce_qfile_region *qfr)
> +{
> +	int i, j;
> +
> +	qfr->pages = kcalloc(qfr->nr_pages, sizeof(*qfr->pages), GFP_ATOMIC);

Why GFP_ATOMIC and not GFP_KERNEL?  GFP_ATOMIC is used all over this file
but there doesn't seem to be any non-sleepable context.

> +	if (!qfr->pages)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < qfr->nr_pages; i++) {
> +		qfr->pages[i] = alloc_page(GFP_ATOMIC | __GFP_ZERO);

Is it worth copying __iommu_dma_alloc_pages() here - using
alloc_pages_node() to allocate memory close to the device and to allocate
compound pages if possible?

Also, do we need GFP_USER here?


More generally, it would be nice to use the DMA API when SVA isn't
supported, instead of manually allocating and mapping memory with
iommu_map(). Do we only handcraft these functions in order to have VA ==
IOVA?  On its own it doesn't seem like a strong enough reason to avoid the
DMA API.

SVA simplifies DMA memory management and enables core mm features for DMA
such as demand paging. VA == IOVA is just a natural consequence. But in
the !SVA mode, the userspace library does need to create DMA mappings
itself. So, since it has special cases for !SVA, it could easily get the
IOVA of a DMA buffer from the kernel using another ioctl.

[...]
> +static struct uacce_qfile_region *
> +uacce_create_region(struct uacce_queue *q, struct vm_area_struct *vma,
> +		    enum uacce_qfrt type, unsigned int flags)
> +{
> +	struct uacce_qfile_region *qfr;
> +	struct uacce_device *uacce = q->uacce;
> +	unsigned long vm_pgoff;
> +	int ret = -ENOMEM;
> +
> +	qfr = kzalloc(sizeof(*qfr), GFP_ATOMIC);
> +	if (!qfr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	qfr->type = type;
> +	qfr->flags = flags;
> +	qfr->iova = vma->vm_start;
> +	qfr->nr_pages = vma_pages(vma);
> +
> +	if (vma->vm_flags & VM_READ)
> +		qfr->prot |= IOMMU_READ;
> +
> +	if (vma->vm_flags & VM_WRITE)
> +		qfr->prot |= IOMMU_WRITE;
> +
> +	if (flags & UACCE_QFRF_SELFMT) {
> +		if (!uacce->ops->mmap) {
> +			ret = -EINVAL;
> +			goto err_with_qfr;
> +		}
> +
> +		ret = uacce->ops->mmap(q, vma, qfr);
> +		if (ret)
> +			goto err_with_qfr;
> +		return qfr;
> +	}

I wish the SVA and !SVA paths were less interleaved. Both models are
fundamentally different:

* Without SVA you cannot share the device between multiple processes. All
  DMA mappings are in the "main", non-PASID address space of the device.

  Note that process isolation without SVA could be achieved with the
  auxiliary domains IOMMU API (introduced primarily for vfio-mdev) but
  this is not the model chosen here.

* With SVA you can share the device between multiple processes. But if the
  process can somehow program its portion of the device to access the main
  address space, you lose isolation. Only the kernel must be able to
  program and access the main address space.

When interleaving both code paths it's easy to make a mistake and lose
this isolation. Although I think this code is correct, it took me some
time to understand that we never end up calling dma_alloc or iommu_map
when using SVA. Might be worth at least adding a check that if
UACCE_DEV_SVA, then we never end up in the bottom part of this function.

> +
> +	/* allocate memory */
> +	if (flags & UACCE_QFRF_DMA) {

At the moment UACCE_QFRF_DMA is never set, so there is a lot of unused and
possibly untested code in this file. I think it would be simpler to choose
between either DMA API or unmanaged IOMMU domains and stick with it. As
said before, I'd prefer DMA API.

> +		qfr->kaddr = dma_alloc_coherent(uacce->pdev,
> +						qfr->nr_pages << PAGE_SHIFT,
> +						&qfr->dma, GFP_KERNEL);
> +		if (!qfr->kaddr) {
> +			ret = -ENOMEM;
> +			goto err_with_qfr;
> +		}
> +	} else {
> +		ret = uacce_qfr_alloc_pages(qfr);
> +		if (ret)
> +			goto err_with_qfr;
> +	}
> +
> +	/* map to device */
> +	ret = uacce_queue_map_qfr(q, qfr);

Worth moving into the else above.

[...]
> +static long uacce_cmd_share_qfr(struct uacce_queue *tgt, int fd)
> +{
> +	struct file *filep;
> +	struct uacce_queue *src;
> +	int ret = -EINVAL;
> +
> +	mutex_lock(&uacce_mutex);
> +
> +	if (tgt->state != UACCE_Q_STARTED)
> +		goto out_with_lock;
> +
> +	filep = fget(fd);
> +	if (!filep)
> +		goto out_with_lock;
> +
> +	if (filep->f_op != &uacce_fops)
> +		goto out_with_fd;
> +
> +	src = filep->private_data;
> +	if (!src)
> +		goto out_with_fd;
> +
> +	if (tgt->uacce->flags & UACCE_DEV_SVA)
> +		goto out_with_fd;
> +
> +	if (!src->qfrs[UACCE_QFRT_SS] || tgt->qfrs[UACCE_QFRT_SS])
> +		goto out_with_fd;
> +
> +	ret = uacce_queue_map_qfr(tgt, src->qfrs[UACCE_QFRT_SS]);

I don't understand what this ioctl does. The function duplicates the
static mappings from one queue to another, right?  But static mappings are
a !SVA thing, and currently with !SVA only a single queue can be opened at a time.
In addition, unless the two queues belong to different devices, they would
share the same IOMMU domain and the mappings would already exist, so you
don't need to call uacce_queue_map_qfr() again.

[...]
> +static long uacce_put_queue(struct uacce_queue *q)
> +{
> +	struct uacce_device *uacce = q->uacce;
> +
> +	mutex_lock(&uacce_mutex);
> +
> +	if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue)
> +		uacce->ops->stop_queue(q);
> +
> +	if ((q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED) &&
> +	     uacce->ops->put_queue)
> +		uacce->ops->put_queue(q);
> +
> +	q->state = UACCE_Q_ZOMBIE;

Since the PUT_Q ioctl makes the queue unrecoverable, why should userspace
invoke it instead of immediately calling close()?

[...]
> +static int uacce_dev_open_check(struct uacce_device *uacce)
> +{
> +	if (uacce->flags & UACCE_DEV_SVA)
> +		return 0;
> +
> +	/*
> +	 * The device can be opened once if it does not support pasid
> +	 */
> +	if (kref_read(&uacce->cdev->kobj.kref) > 2)

Why 2?  It doesn't feel right to access the cdev internals for this, could
we just have a ref uacce->opened for this purpose?

> +		return -EBUSY;
> +
> +	return 0;
> +}
> +
> +static int uacce_fops_open(struct inode *inode, struct file *filep)
> +{
> +	struct uacce_queue *q;
> +	struct iommu_sva *handle = NULL;
> +	struct uacce_device *uacce;
> +	int ret;
> +	int pasid = 0;
> +
> +	uacce = idr_find(&uacce_idr, iminor(inode));
> +	if (!uacce)
> +		return -ENODEV;
> +
> +	if (!try_module_get(uacce->pdev->driver->owner))
> +		return -ENODEV;
> +
> +	ret = uacce_dev_open_check(uacce);
> +	if (ret)
> +		goto out_with_module;
> +
> +	if (uacce->flags & UACCE_DEV_SVA) {
> +		handle = iommu_sva_bind_device(uacce->pdev, current->mm, NULL);
> +		if (IS_ERR(handle))
> +			goto out_with_module;
> +		pasid = iommu_sva_get_pasid(handle);

We need to register an mm_exit callback. Once we return, userspace will
start running jobs on the accelerator. If the process is killed while the
accelerator is running, the mm_exit callback tells the device driver to
stop using this PASID (stop_queue()), so that it can be reallocated for
another process.

Implementing this with the right locking and ordering can be tricky. I'll
try to implement the callback and test it on the device this week.

[...]
> +static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
> +{
> +	struct uacce_queue *q = filep->private_data;
> +	struct uacce_device *uacce = q->uacce;
> +	struct uacce_qfile_region *qfr;
> +	enum uacce_qfrt type = 0;
> +	unsigned int flags = 0;
> +	int ret;
> +
> +	if (vma->vm_pgoff < UACCE_QFRT_MAX)
> +		type = vma->vm_pgoff;
> +
> +	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
> +
> +	mutex_lock(&uacce_mutex);
> +
> +	/* fixme: if the region need no pages, we don't need to check it */
> +	if (q->mm->data_vm + vma_pages(vma) >
> +	    rlimit(RLIMIT_DATA) >> PAGE_SHIFT) {

Doesn't may_expand_vm() do the job already?

> +		ret = -ENOMEM;
> +		goto out_with_lock;
> +	}
> +
> +	if (q->qfrs[type]) {
> +		ret = -EBUSY;
> +		goto out_with_lock;
> +	}
> +
> +	switch (type) {
> +	case UACCE_QFRT_MMIO:
> +		flags = UACCE_QFRF_SELFMT;
> +		break;
> +
> +	case UACCE_QFRT_SS:
> +		if (q->state != UACCE_Q_STARTED) {
> +			ret = -EINVAL;
> +			goto out_with_lock;
> +		}
> +
> +		if (uacce->flags & UACCE_DEV_SVA) {
> +			ret = -EINVAL;
> +			goto out_with_lock;
> +		}
> +
> +		flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
> +
> +		break;
> +
> +	case UACCE_QFRT_DKO:
> +		if (uacce->flags & UACCE_DEV_SVA) {
> +			ret = -EINVAL;
> +			goto out_with_lock;
> +		}
> +
> +		flags = UACCE_QFRF_MAP | UACCE_QFRF_KMAP;
> +
> +		break;
> +
> +	case UACCE_QFRT_DUS:
> +		if (uacce->flags & UACCE_DEV_SVA) {
> +			flags = UACCE_QFRF_SELFMT;
> +			break;
> +		}
> +
> +		flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
> +		break;
> +
> +	default:
> +		WARN_ON(&uacce->dev);
> +		break;
> +	}
> +
> +	qfr = uacce_create_region(q, vma, type, flags);

Don't we need to setup a a vma->vm_ops->close callback, to remove this
region on munmap()?

> +	if (IS_ERR(qfr)) {
> +		ret = PTR_ERR(qfr);
> +		goto out_with_lock;
> +	}
> +	q->qfrs[type] = qfr;
> +
> +	if (type == UACCE_QFRT_SS) {
> +		INIT_LIST_HEAD(&qfr->qs);
> +		list_add(&q->list, &q->qfrs[type]->qs);
> +	}
> +
> +	mutex_unlock(&uacce_mutex);
> +
> +	if (qfr->pages)
> +		q->mm->data_vm += qfr->nr_pages;

This too should be done by the core already.

[...]
> +/* Borrowed from VFIO to fix msi translation */
> +static bool uacce_iommu_has_sw_msi(struct iommu_group *group,

Sharing the same functions would be nicer.

[...]
> +struct uacce_device *uacce_register(struct device *parent,
> +				    struct uacce_interface *interface)
> +{
> +	int ret;
> +	struct uacce_device *uacce;
> +	unsigned int flags = interface->flags;
> +
> +	uacce = kzalloc(sizeof(struct uacce_device), GFP_KERNEL);
> +	if (!uacce)
> +		return ERR_PTR(-ENOMEM);
> +
> +	if (flags & UACCE_DEV_SVA) {
> +		ret = iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA);
> +		if (ret)
> +			flags &= ~UACCE_DEV_SVA;
> +	}
> +
> +	uacce->pdev = parent;
> +	uacce->flags = flags;
> +	uacce->ops = interface->ops;
> +
> +	ret = uacce_set_iommu_domain(uacce);
> +	if (ret)
> +		goto err_free;
> +
> +	mutex_lock(&uacce_mutex);
> +
> +	ret = idr_alloc(&uacce_idr, uacce, 0, 0, GFP_KERNEL);
> +	if (ret < 0)
> +		goto err_with_lock;
> +
> +	uacce->cdev = cdev_alloc();

Need to check the return value.

> +	uacce->cdev->ops = &uacce_fops;
> +	uacce->dev_id = ret;
> +	uacce->cdev->owner = THIS_MODULE;
> +	device_initialize(&uacce->dev);
> +	uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
> +	uacce->dev.class = uacce_class;
> +	uacce->dev.groups = uacce_dev_groups;
> +	uacce->dev.parent = uacce->pdev;
> +	uacce->dev.release = uacce_release;
> +	dev_set_name(&uacce->dev, "%s-%d", interface->name, uacce->dev_id);
> +	ret = cdev_device_add(uacce->cdev, &uacce->dev);
> +	if (ret)
> +		goto err_with_idr;
> +
> +	mutex_unlock(&uacce_mutex);

We published the new device into /dev/ and /sys/ even though the uacce
structure has yet to be completed by the caller (for example qf_pg_size,
api_ver, etc). Maybe we can add an initializer to uacce_ops so we can
publish a complete structure?

> +
> +	return uacce;
> +
> +err_with_idr:
> +	idr_remove(&uacce_idr, uacce->dev_id);
> +err_with_lock:
> +	mutex_unlock(&uacce_mutex);
> +	uacce_unset_iommu_domain(uacce);
> +err_free:
> +	if (flags & UACCE_DEV_SVA)
> +		iommu_dev_disable_feature(uacce->pdev, IOMMU_DEV_FEAT_SVA);
> +	kfree(uacce);
> +	return ERR_PTR(ret);
> +}
> +EXPORT_SYMBOL_GPL(uacce_register);
> +
> +/**
> + * uacce_unregister - unregisters an accelerator
> + * @uacce: the accelerator to unregister
> + */
> +void uacce_unregister(struct uacce_device *uacce)
> +{
> +	if (uacce == NULL)
> +		return;
> +
> +	mutex_lock(&uacce_mutex);

Are we certain that no open queue remains?

> +
> +	if (uacce->flags & UACCE_DEV_SVA)
> +		iommu_dev_disable_feature(uacce->pdev, IOMMU_DEV_FEAT_SVA);
> +
> +	uacce_unset_iommu_domain(uacce);
> +	cdev_device_del(uacce->cdev, &uacce->dev);
> +	idr_remove(&uacce_idr, uacce->dev_id);
> +	put_device(&uacce->dev);
> +
> +	mutex_unlock(&uacce_mutex);
> +}
> +EXPORT_SYMBOL_GPL(uacce_unregister);

> diff --git a/include/uapi/misc/uacce/uacce.h b/include/uapi/misc/uacce/uacce.h
> new file mode 100644
> index 0000000..c859668
> --- /dev/null
> +++ b/include/uapi/misc/uacce/uacce.h
> @@ -0,0 +1,41 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */

Needs to be 
/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */

Otherwise headers_install.sh complains on v5.4 (same for the qm UAPI in
patch 3)

> +#ifndef _UAPIUUACCE_H
> +#define _UAPIUUACCE_H
> +
> +#include <linux/types.h>
> +#include <linux/ioctl.h>
> +
> +#define UACCE_CMD_SHARE_SVAS	_IO('W', 0)
> +#define UACCE_CMD_START		_IO('W', 1)
> +#define UACCE_CMD_PUT_Q		_IO('W', 2)

These must be documented.

> +
> +/**
> + * enum uacce_dev_flag: Device flags:
> + * @UACCE_DEV_SHARE_DOMAIN: no PASID, can share sva for one process
> + * @UACCE_DEV_SVA: Shared Virtual Addresses
> + *		   Support PASID
> + *		   Support device page fault (pcie device) or
> + *		   smmu stall (platform device)

Both stall and PRI are device page faults, so this could say "Support
device page faults (PCI PRI or SMMU Stall)".

> + */
> +enum uacce_dev_flag {
> +	UACCE_DEV_SHARE_DOMAIN = 0x0,
> +	UACCE_DEV_SVA = 0x1,
> +};

This is a bitmap so UACCE_DEV_SHARE_DOMAIN will lose its meaning when
adding a new flag. There will be:

	UACCE_DEV_SVA		= 1 << 0,
	UACCE_DEV_NEWFEATURE	= 1 << 1,

Then a value of zero will simply mean "no special feature". I think we
could simply remove UACCE_DEV_SHARE_DOMAIN now, it's not used.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 1/3] uacce: Add documents for uacce
  2019-10-16  8:34 ` [PATCH v6 1/3] uacce: Add documents for uacce Zhangfei Gao
@ 2019-10-16 18:36   ` Jean-Philippe Brucker
       [not found]     ` <5da81d06.1c69fb81.395d6.c080SMTPIN_ADDED_BROKEN@mx.google.com>
  0 siblings, 1 reply; 15+ messages in thread
From: Jean-Philippe Brucker @ 2019-10-16 18:36 UTC (permalink / raw)
  To: Zhangfei Gao
  Cc: Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu, jonathan.cameron,
	grant.likely, ilias.apalodimas, francois.ozog, kenneth-lee-2012,
	Wangzhou, haojian . zhuang, linux-accelerators, linux-kernel,
	linux-crypto, Kenneth Lee, Zaibo Xu

Hi,

I already commented on the interface in patch 2/3, so I just have a few
additional comments on the documentation itself.

On Wed, Oct 16, 2019 at 04:34:31PM +0800, Zhangfei Gao wrote:
> +The user API
> +------------
> +
> +We adopt a polling-style interface in user space: ::
> +
> +	int wd_request_queue(struct wd_queue *q);
> +	void wd_release_queue(struct wd_queue *q);
> +	int wd_send(struct wd_queue *q, void *req);
> +	int wd_recv(struct wd_queue *q, void **req);
> +	int wd_recv_sync(struct wd_queue *q, void **req);
> +	void wd_flush(struct wd_queue *q);
> +
> +wd_recv_sync() is a wrapper around its non-sync version. It will trap into
> +the kernel and wait until the queue becomes available.
> +
> +If the queue does not support SVA/SVM, the following helper functions
> +can be used to create Static Virtual Share Memory: ::
> +
> +	void *wd_reserve_memory(struct wd_queue *q, size_t size);
> +	int wd_share_reserved_memory(struct wd_queue *q,
> +				     struct wd_queue *target_q);
> +
> +The user API is not mandatory. It is simply a suggestion and a hint of what
> +the kernel interface is supposed to be.

Maybe move this to the beginning of the section, to make it clear that
you're describing an example API. On first read I found it odd that we're
documenting a userspace library in this document.

[...]
> +
> +The Memory Sharing Model
> +------------------------
> +The perfect form of a Uacce device is to support SVM/SVA. We built this upon
> +Jean Philippe Brucker's SVA patches. [1]

I don't think this belongs in the doc, more on a cover letter. Since the
SVA API is now upstream (implementation in progress), you could simply say
something like "the uacce device is built around the IOMMU SVA API". 

> +
> +If the hardware supports UACCE_DEV_SVA, the user process's page table is
> +shared with the opened queue, so the device can access any address in the
> +process address space,
> +and it can raise a page fault if the physical page is not present yet.
> +It can also access addresses in kernel space, which are referenced by
> +another page table particular to the kernel. Most IOMMU implementations
> +handle this with a tag on the address request of the device. For example, the
> +Arm SMMU uses the SSV bit to indicate whether the address request is for
> +kernel or user space.

That might be a bit too detailed, you can just say that the device can
access multiple address spaces, including the one without PASID. All IOMMU
architectures with PASID support this now.

> +Queue file regions can be used:
> +UACCE_QFRT_MMIO: device mmio region (map to user)
> +UACCE_QFRT_DUS: device user share (map to dev and user)
> +
> +If the device does not support UACCE_DEV_SVA, Uacce allows only one process at
> +a time. The DMA API cannot be used either, since Uacce will create an
> +unmanaged iommu_domain for the device.
> +Queue file regions can be used:
> +UACCE_QFRT_MMIO: device mmio region (map to user)
> +UACCE_QFRT_DKO: device kernel-only (map to dev, no user)
> +UACCE_QFRT_DUS: device user share (map to dev and user)
> +UACCE_QFRT_SS:  static shared memory (map to devs and user)
> +
> +
> +The Fork Scenario
> +=================
> +For a process with allocated queues and shared memory, what happens when it
> +forks a child?
> +
> +The fd of the queue will be duplicated on fork, so the child can send requests
> +to the same queue as its parent. But requests sent from any process other
> +than the one that opened the queue will be blocked.

Would it be correct and clearer to say "The fd of the queue is duplicated
on fork, but requests sent from the child process are blocked"?

> +
> +It is recommended to add O_CLOEXEC to the queue file.
> +
> +The queue mmap space has a VM_DONTCOPY in its VMA. So the child will lose all
> +those VMAs.
> +
> +This is a reason why Uacce does not adopt the mode used in VFIO and
> +InfiniBand. Both solutions can set any user pointer for hardware sharing,
> +but they cannot support fork while DMA is in progress; otherwise the
> +"Copy-On-Write" procedure would make the parent process lose its physical
> +pages.
> +
> +
> +Difference to the VFIO and IB framework
> +---------------------------------------
> +The essential function of Uacce is to let the device access the user
> +address directly. There are many device drivers doing the same in the kernel.
> +And both VFIO and IB can provide similar functions in framework level.
> +
> +But Uacce has a different goal: "share address space". It does
> +not treat the request to the accelerator as an enclosed data structure; it
> +treats the accelerator as another thread of the same process, so the
> +accelerator can refer to any address used by the process.
> +
> +Both VFIO and IB treat this as "memory sharing", not "address sharing".
> +They care more about sharing a block of memory. But if an address
> +stored in the block refers to another memory region, that address may
> +not be valid.
> +
> +By adding more constraints to the VFIO and IB frameworks, in some sense, we
> +might achieve a similar goal. But we gave that up in the end. Both VFIO and IB
> +make extra assumptions which are unnecessary for Uacce. They may hurt each
> +other if we try to merge them together.

I don't know if this particular rationale belongs here rather than a cover
letter, but some of this section can be useful to let users decide if they
need uacce or VFIO.

For the record I'm still not entirely convinced that a new solution is
preferable to vfio-mdev.
* Existing userspace drivers such as DPDK may someday benefit from
  adding SVA support to VFIO.
* Patch 2/3 does seem to duplicate a lot of VFIO code for the !SVA mode.
  I'd rather we avoided !SVA support altogether at first, to make the code
  simpler.
* The issue with fork should be fixed in VFIO anyway, if it's an actual
  concern for userspace drivers.

On the other hand, I do agree with the following paragraph that a lighter
solution such as uacce focusing on shared address space and queues could
mean less work for device drivers and libraries.

It would be interesting to write a device driver prototype that implements
both vfio-mdev (with added SVA support) and uacce interfaces and compare
them. But since I'm not the one writing this or the corresponding
userspace libs, I'll stop advocating vfio-mdev next time and focus on the
implementation details :)

Thanks,
Jean

> +
> +VFIO manages the resources of a hardware unit as a "virtual device". If a
> +device needs to serve a separate application, it must isolate the resources
> +as a separate virtual device, and the life cycles of the application and the
> +virtual device are unnecessarily unrelated. Most concepts, such as bus,
> +driver, probe and so on, needed to make it a "device" are unnecessary as
> +well. And the logic added to VFIO to enable address sharing does not help
> +with "creating a virtual device".
> +
> +IB creates a "verbs" standard for sharing a memory region with a remote
> +entity.  Most of these verbs exist to keep memory regions between entities
> +synchronized.  This is not what an accelerator needs. An accelerator is in
> +the same memory system as the CPU; it refers to the same memory system shared
> +by the CPU and devices. So the local memory terms/verbs are good enough for
> +it, and extra "verbs" are not necessary. And its queue (like a queue pair in
> +IB) is a communication channel directly to the accelerator hardware; it has
> +nothing to do with memory itself.
> +
> +Further, both VFIO and IB use the "pin" (get_user_page) way to lock local
> +memory in place.  This is flexible, but it can cause other problems. For
> +example, if the user process forks a child process, the COW procedure may
> +make the parent process lose the pages it is sharing with the device. These
> +may be fixed in the future, but it is not going to be easy. (There is a
> +discussion about this from Linux Plumbers Conference 2018 [2])
> +
> +So we choose to build the solution directly on top of the IOMMU interface.
> +The IOMMU is the essential way for a device and a process to share their page
> +mapping from the hardware perspective, so it is safe to build a software
> +solution on this assumption.  Uacce manages the IOMMU interface for the
> +accelerator device, so the device driver can export some of its resources to
> +user space. Uacce can then make sure the device and the process have the same
> +address space.
> +
> +
> +References
> +==========
> +.. [1] http://jpbrucker.net/sva/
> +.. [2] https://lwn.net/Articles/774411/
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 1/3] uacce: Add documents for uacce
       [not found]     ` <5da81d06.1c69fb81.395d6.c080SMTPIN_ADDED_BROKEN@mx.google.com>
@ 2019-10-21 13:34       ` Jean-Philippe Brucker
  0 siblings, 0 replies; 15+ messages in thread
From: Jean-Philippe Brucker @ 2019-10-21 13:34 UTC (permalink / raw)
  To: Kenneth Lee
  Cc: Zhangfei Gao, Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu,
	jonathan.cameron, grant.likely, ilias.apalodimas, francois.ozog,
	Wangzhou, haojian . zhuang, linux-accelerators, linux-kernel,
	linux-crypto, Kenneth Lee, Zaibo Xu

Hi Kenneth,

On Thu, Oct 17, 2019 at 03:49:07PM +0800, Kenneth Lee wrote:
> Dear Jean,
> 
> Please let me answer your question about why we build another subsystem
> other than use vfio-mdev.
> 
> I think you might remember that we did build WarpDrive on top of
> vfio-mdev from the very beginning.

Right, thanks for reminding me; I had forgotten about the first RFCs.

> Both RFCv1, Share Parent IOMMU mdev, and RFCv2, Share Domain mdev,
> are based on mdev. We got many comments, and we finally felt we could not
> solve all of them if we continued in the mdev direction.
> 
> I think the key problem here is that mdev is a virtual *device*. So:
> 
> 1. As you have said, this creates more logic which is useless for an
>    accelerator. For example, it gives the user full control of DMA
>    and irqs, it replays the dma mappings for a newly attached device, and
>    it creates VFIO IOMMU drivers... These are necessary to present a raw
>    device to a virtual machine, but they are not necessary for an
>    accelerator.
> 
> 2. You are forced to partition the resources to a device before using it.
>    And if the user process crashes, we need an extra facility to put them
>    back into the resource pool.

I don't understand the difference between vfio-mdev and uacce in this
context. An example may help. If you want to give direct access of a bit
of hardware to userspace, you necessarily need to isolate any resource
associated to that partition from other processes and from the kernel.
Namely create a DMA address space (SVA or AUXD), allocate an MMIO frame
and an interrupt (although IRQs are still handled by the kernel and could
be shared). And then you need to release those resources back into the
pool when the process exits or crashes.

> 3. Though Alex Williamson argues that vfio is not just used for
>    virtualisation, it is indeed used only by virtualisation for the
>    time being.

There is DPDK, which implements userspace drivers for net and crypto using
vfio-pci, and will likely gain support for vfio-mdev soon. However,
similarly to Qemu, they are self-contained and can easily deal with the
fork problem, unlike a decompression library, for example, that could be
included by any application.

In any case, I agree with your point 1, that a simpler user interface might
be beneficial. Perhaps DPDK could support uacce as well later.

Thanks,
Jean

>    And Jerome Glisse (also from Redhat) said he could
>    accept some problems from a virtual machine because we can constrain
>    the behavior of the virtual machine program (such as qemu), but he
>    could not accept them happening in a general application. For example,
>    if you pin memory in a process and then fork, you may lose the
>    physical page due to COW. We can solve the problem in uacce by
>    letting go of the shared pages in the child.
> 
> So we think we should not continue in the mdev direction for uacce. Even
> if we merged the logic together, it would become a burden for both vfio
> and uacce. Both of them make use of the IOMMU, so of course they will have
> some similar code. But they go in different directions: VFIO is trying to
> export virtual DMA and irqs to user space, while uacce is trying to
> let the accelerator share an address space with the process itself. Many
> tricks can be played between the final user interface and the address
> translation framework. Merging them together would not be good for either.
> 
> Hope this answers some of your questions.
> 
> Cheers:)


* Re: [PATCH v6 2/3] uacce: add uacce driver
  2019-10-16  8:34 ` [PATCH v6 2/3] uacce: add uacce driver Zhangfei Gao
  2019-10-16 17:28   ` Jean-Philippe Brucker
@ 2019-10-22 18:49   ` Jerome Glisse
       [not found]     ` <20191024064129.GB17723@kllp10>
  2019-10-25  7:01     ` zhangfei
  1 sibling, 2 replies; 15+ messages in thread
From: Jerome Glisse @ 2019-10-22 18:49 UTC (permalink / raw)
  To: Zhangfei Gao
  Cc: Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu, jonathan.cameron,
	grant.likely, jean-philippe, ilias.apalodimas, francois.ozog,
	kenneth-lee-2012, Wangzhou, haojian . zhuang, Zaibo Xu,
	linux-kernel, linux-crypto, Kenneth Lee, linux-accelerators

On Wed, Oct 16, 2019 at 04:34:32PM +0800, Zhangfei Gao wrote:
> From: Kenneth Lee <liguozhu@hisilicon.com>
> 
> Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
> provide Shared Virtual Addressing (SVA) between accelerators and
> processes, so an accelerator can access any data structure of the main
> CPU. This differs from data sharing between the CPU and an I/O device,
> which shares data content rather than addresses.
> Thanks to the unified address space, the hardware and the user space of a
> process can share the same virtual addresses when communicating.
> 
> Uacce creates a chrdev for every registration; a queue is allocated to
> the process when the chrdev is opened. The process can then access the
> hardware resources by interacting with the queue file. By mmap-ing the
> queue file into user space, the process can submit requests directly to
> the hardware without syscalls into kernel space.

You need to remove all API that is not used by your first driver, as
it will most likely bit-rot without users. It is way better to add
things when a driver starts to make use of them.

I am still not convinced of the value of adding a new framework here
with only a single device as an example. It looks similar to some of
the fpga devices. Sadly, because framework layering is not something
that exists, I guess inventing a new framework is the only answer when
you cannot quite fit into an existing one.

A more fundamental question is: why do you need to change the IOMMU
domain of the device? I do not see any reason for that unless PASID
has some restriction on ARM that I do not know of.

I do have multiple comments and point out various serious issues
below.

As it is, from my POV it is a NAK. Note that I am not opposed to
adding a new framework, just that you need to trim things down
to what is used by your first driver, and you also need to address
the various issues I point out below.

Cheers,
Jérôme

> 
> Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
> Signed-off-by: Zaibo Xu <xuzaibo@huawei.com>
> Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
> Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
> ---
>  Documentation/ABI/testing/sysfs-driver-uacce |  65 ++
>  drivers/misc/Kconfig                         |   1 +
>  drivers/misc/Makefile                        |   1 +
>  drivers/misc/uacce/Kconfig                   |  13 +
>  drivers/misc/uacce/Makefile                  |   2 +
>  drivers/misc/uacce/uacce.c                   | 995 +++++++++++++++++++++++++++
>  include/linux/uacce.h                        | 168 +++++
>  include/uapi/misc/uacce/uacce.h              |  41 ++
>  8 files changed, 1286 insertions(+)
>  create mode 100644 Documentation/ABI/testing/sysfs-driver-uacce
>  create mode 100644 drivers/misc/uacce/Kconfig
>  create mode 100644 drivers/misc/uacce/Makefile
>  create mode 100644 drivers/misc/uacce/uacce.c
>  create mode 100644 include/linux/uacce.h
>  create mode 100644 include/uapi/misc/uacce/uacce.h
> 

[...]

> diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
> new file mode 100644
> index 0000000..534ddc3
> --- /dev/null
> +++ b/drivers/misc/uacce/uacce.c
> @@ -0,0 +1,995 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +#include <linux/compat.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/file.h>
> +#include <linux/irqdomain.h>
> +#include <linux/module.h>
> +#include <linux/poll.h>
> +#include <linux/sched/signal.h>
> +#include <linux/uacce.h>
> +
> +static struct class *uacce_class;
> +static DEFINE_IDR(uacce_idr);
> +static dev_t uacce_devt;
> +static DEFINE_MUTEX(uacce_mutex);
> +static const struct file_operations uacce_fops;
> +
> +static int uacce_queue_map_qfr(struct uacce_queue *q,
> +			       struct uacce_qfile_region *qfr)
> +{
> +	struct device *dev = q->uacce->pdev;
> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> +	int i, j, ret;
> +
> +	if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA))
> +		return 0;
> +
> +	if (!domain)
> +		return -ENODEV;
> +
> +	for (i = 0; i < qfr->nr_pages; i++) {
> +		ret = iommu_map(domain, qfr->iova + i * PAGE_SIZE,
> +				page_to_phys(qfr->pages[i]),
> +				PAGE_SIZE, qfr->prot | q->uacce->prot);
> +		if (ret)
> +			goto err_with_map_pages;
> +
> +		get_page(qfr->pages[i]);
> +	}
> +
> +	return 0;
> +
> +err_with_map_pages:
> +	for (j = i - 1; j >= 0; j--) {
> +		iommu_unmap(domain, qfr->iova + j * PAGE_SIZE, PAGE_SIZE);
> +		put_page(qfr->pages[j]);
> +	}
> +	return ret;
> +}
> +
> +static void uacce_queue_unmap_qfr(struct uacce_queue *q,
> +				  struct uacce_qfile_region *qfr)
> +{
> +	struct device *dev = q->uacce->pdev;
> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> +	int i;
> +
> +	if (!domain || !qfr)
> +		return;
> +
> +	if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA))
> +		return;
> +
> +	for (i = qfr->nr_pages - 1; i >= 0; i--) {
> +		iommu_unmap(domain, qfr->iova + i * PAGE_SIZE, PAGE_SIZE);
> +		put_page(qfr->pages[i]);
> +	}
> +}
> +
> +static int uacce_qfr_alloc_pages(struct uacce_qfile_region *qfr)
> +{
> +	int i, j;
> +
> +	qfr->pages = kcalloc(qfr->nr_pages, sizeof(*qfr->pages), GFP_ATOMIC);
> +	if (!qfr->pages)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < qfr->nr_pages; i++) {
> +		qfr->pages[i] = alloc_page(GFP_ATOMIC | __GFP_ZERO);
> +		if (!qfr->pages[i])
> +			goto err_with_pages;
> +	}
> +
> +	return 0;
> +
> +err_with_pages:
> +	for (j = i - 1; j >= 0; j--)
> +		put_page(qfr->pages[j]);
> +
> +	kfree(qfr->pages);
> +	return -ENOMEM;
> +}
> +
> +static void uacce_qfr_free_pages(struct uacce_qfile_region *qfr)
> +{
> +	int i;
> +
> +	for (i = 0; i < qfr->nr_pages; i++)
> +		put_page(qfr->pages[i]);
> +
> +	kfree(qfr->pages);
> +}
> +
> +static inline int uacce_queue_mmap_qfr(struct uacce_queue *q,
> +				       struct uacce_qfile_region *qfr,
> +				       struct vm_area_struct *vma)
> +{
> +	int i, ret;
> +
> +	for (i = 0; i < qfr->nr_pages; i++) {
> +		ret = remap_pfn_range(vma, vma->vm_start + (i << PAGE_SHIFT),
> +				      page_to_pfn(qfr->pages[i]), PAGE_SIZE,
> +				      vma->vm_page_prot);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static struct uacce_qfile_region *
> +uacce_create_region(struct uacce_queue *q, struct vm_area_struct *vma,
> +		    enum uacce_qfrt type, unsigned int flags)
> +{
> +	struct uacce_qfile_region *qfr;
> +	struct uacce_device *uacce = q->uacce;
> +	unsigned long vm_pgoff;
> +	int ret = -ENOMEM;
> +
> +	qfr = kzalloc(sizeof(*qfr), GFP_ATOMIC);
> +	if (!qfr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	qfr->type = type;
> +	qfr->flags = flags;
> +	qfr->iova = vma->vm_start;
> +	qfr->nr_pages = vma_pages(vma);
> +
> +	if (vma->vm_flags & VM_READ)
> +		qfr->prot |= IOMMU_READ;
> +
> +	if (vma->vm_flags & VM_WRITE)
> +		qfr->prot |= IOMMU_WRITE;
> +
> +	if (flags & UACCE_QFRF_SELFMT) {
> +		if (!uacce->ops->mmap) {
> +			ret = -EINVAL;
> +			goto err_with_qfr;
> +		}
> +
> +		ret = uacce->ops->mmap(q, vma, qfr);
> +		if (ret)
> +			goto err_with_qfr;
> +		return qfr;
> +	}
> +
> +	/* allocate memory */
> +	if (flags & UACCE_QFRF_DMA) {
> +		qfr->kaddr = dma_alloc_coherent(uacce->pdev,
> +						qfr->nr_pages << PAGE_SHIFT,
> +						&qfr->dma, GFP_KERNEL);
> +		if (!qfr->kaddr) {
> +			ret = -ENOMEM;
> +			goto err_with_qfr;
> +		}
> +	} else {
> +		ret = uacce_qfr_alloc_pages(qfr);
> +		if (ret)
> +			goto err_with_qfr;
> +	}
> +
> +	/* map to device */
> +	ret = uacce_queue_map_qfr(q, qfr);
> +	if (ret)
> +		goto err_with_pages;
> +
> +	/* mmap to user space */
> +	if (flags & UACCE_QFRF_MMAP) {
> +		if (flags & UACCE_QFRF_DMA) {
> +			/* dma_mmap_coherent() requires vm_pgoff as 0
> +			 * restore vm_pfoff to initial value for mmap()
> +			 */

I would argue that dma_mmap_coherent() is not the right function
to use here; you might be better off doing remap_pfn_range() on your
own.

Working around an existing API is not something you want to do; it can
easily break, and it makes it harder for people who want to update that
API without breaking anyone.

> +			vm_pgoff = vma->vm_pgoff;
> +			vma->vm_pgoff = 0;
> +			ret = dma_mmap_coherent(uacce->pdev, vma, qfr->kaddr,
> +						qfr->dma,
> +						qfr->nr_pages << PAGE_SHIFT);
> +			vma->vm_pgoff = vm_pgoff;
> +		} else {
> +			ret = uacce_queue_mmap_qfr(q, qfr, vma);
> +		}
> +
> +		if (ret)
> +			goto err_with_mapped_qfr;
> +	}
> +
> +	return qfr;
> +
> +err_with_mapped_qfr:
> +	uacce_queue_unmap_qfr(q, qfr);
> +err_with_pages:
> +	if (flags & UACCE_QFRF_DMA)
> +		dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT,
> +				  qfr->kaddr, qfr->dma);
> +	else
> +		uacce_qfr_free_pages(qfr);
> +err_with_qfr:
> +	kfree(qfr);
> +
> +	return ERR_PTR(ret);
> +}
> +
> +static void uacce_destroy_region(struct uacce_queue *q,
> +				 struct uacce_qfile_region *qfr)
> +{
> +	struct uacce_device *uacce = q->uacce;
> +
> +	if (qfr->flags & UACCE_QFRF_DMA) {
> +		dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT,
> +				  qfr->kaddr, qfr->dma);
> +	} else if (qfr->pages) {
> +		if (qfr->flags & UACCE_QFRF_KMAP && qfr->kaddr) {
> +			vunmap(qfr->kaddr);
> +			qfr->kaddr = NULL;
> +		}
> +
> +		uacce_qfr_free_pages(qfr);
> +	}
> +	kfree(qfr);
> +}
> +
> +static long uacce_cmd_share_qfr(struct uacce_queue *tgt, int fd)

It would be nice to comment on what this function does; AFAICT it tries
to share a uacce_qfile_region. Anyway, this should be removed
altogether as it is not used by your first driver.

> +{
> +	struct file *filep;
> +	struct uacce_queue *src;
> +	int ret = -EINVAL;
> +
> +	mutex_lock(&uacce_mutex);
> +
> +	if (tgt->state != UACCE_Q_STARTED)
> +		goto out_with_lock;
> +
> +	filep = fget(fd);
> +	if (!filep)
> +		goto out_with_lock;
> +
> +	if (filep->f_op != &uacce_fops)
> +		goto out_with_fd;
> +
> +	src = filep->private_data;
> +	if (!src)
> +		goto out_with_fd;
> +
> +	if (tgt->uacce->flags & UACCE_DEV_SVA)
> +		goto out_with_fd;
> +
> +	if (!src->qfrs[UACCE_QFRT_SS] || tgt->qfrs[UACCE_QFRT_SS])
> +		goto out_with_fd;
> +
> +	ret = uacce_queue_map_qfr(tgt, src->qfrs[UACCE_QFRT_SS]);
> +	if (ret)
> +		goto out_with_fd;
> +
> +	tgt->qfrs[UACCE_QFRT_SS] = src->qfrs[UACCE_QFRT_SS];
> +	list_add(&tgt->list, &src->qfrs[UACCE_QFRT_SS]->qs);

This list_add() seems bogus, as src->qfrs would already be
on a list, so you are corrupting the list it is on.

> +
> +out_with_fd:
> +	fput(filep);
> +out_with_lock:
> +	mutex_unlock(&uacce_mutex);
> +	return ret;
> +}

[...]

> +static long uacce_fops_unl_ioctl(struct file *filep,
> +				 unsigned int cmd, unsigned long arg)

You need to document all ioctls properly, and you also need to
remove those that are not used by your first driver. They will
just bit-rot, as we do not know if they will ever be used.

> +{
> +	struct uacce_queue *q = filep->private_data;
> +	struct uacce_device *uacce = q->uacce;
> +
> +	switch (cmd) {
> +	case UACCE_CMD_SHARE_SVAS:
> +		return uacce_cmd_share_qfr(q, arg);
> +
> +	case UACCE_CMD_START:
> +		return uacce_start_queue(q);
> +
> +	case UACCE_CMD_PUT_Q:
> +		return uacce_put_queue(q);
> +
> +	default:
> +		if (!uacce->ops->ioctl)
> +			return -EINVAL;
> +
> +		return uacce->ops->ioctl(q, cmd, arg);
> +	}
> +}
> +

[...]

> +
> +static int uacce_dev_open_check(struct uacce_device *uacce)
> +{
> +	if (uacce->flags & UACCE_DEV_SVA)
> +		return 0;
> +
> +	/*
> +	 * The device can be opened once if it does not support pasid
> +	 */
> +	if (kref_read(&uacce->cdev->kobj.kref) > 2)
> +		return -EBUSY;

You do not check whether the device supports pasid, so the comment does
not match the code. Right now the code says that you cannot open a
device more than once. Also, this check is racy: there is no lock
protecting the read.

> +
> +	return 0;
> +}
> +
> +static int uacce_fops_open(struct inode *inode, struct file *filep)
> +{
> +	struct uacce_queue *q;
> +	struct iommu_sva *handle = NULL;
> +	struct uacce_device *uacce;
> +	int ret;
> +	int pasid = 0;
> +
> +	uacce = idr_find(&uacce_idr, iminor(inode));
> +	if (!uacce)
> +		return -ENODEV;
> +
> +	if (!try_module_get(uacce->pdev->driver->owner))
> +		return -ENODEV;
> +
> +	ret = uacce_dev_open_check(uacce);
> +	if (ret)
> +		goto out_with_module;
> +
> +	if (uacce->flags & UACCE_DEV_SVA) {
> +		handle = iommu_sva_bind_device(uacce->pdev, current->mm, NULL);
> +		if (IS_ERR(handle))
> +			goto out_with_module;
> +		pasid = iommu_sva_get_pasid(handle);
> +	}

The file descriptor can outlive the mm (through fork); what happens
when the mm dies? Where is the sva_unbind? Maybe in iommu code.
At the very least a comment should be added explaining what happens.

> +
> +	q = kzalloc(sizeof(struct uacce_queue), GFP_KERNEL);
> +	if (!q) {
> +		ret = -ENOMEM;
> +		goto out_with_module;
> +	}
> +
> +	if (uacce->ops->get_queue) {
> +		ret = uacce->ops->get_queue(uacce, pasid, q);
> +		if (ret < 0)
> +			goto out_with_mem;
> +	}
> +
> +	q->pasid = pasid;
> +	q->handle = handle;
> +	q->uacce = uacce;
> +	q->mm = current->mm;
> +	memset(q->qfrs, 0, sizeof(q->qfrs));
> +	INIT_LIST_HEAD(&q->list);
> +	init_waitqueue_head(&q->wait);
> +	filep->private_data = q;
> +	q->state = UACCE_Q_INIT;
> +
> +	return 0;
> +
> +out_with_mem:
> +	kfree(q);
> +out_with_module:
> +	module_put(uacce->pdev->driver->owner);
> +	return ret;
> +}
> +
> +static int uacce_fops_release(struct inode *inode, struct file *filep)
> +{
> +	struct uacce_queue *q = filep->private_data;
> +	struct uacce_qfile_region *qfr;
> +	struct uacce_device *uacce = q->uacce;
> +	bool is_to_free_region;
> +	int free_pages = 0;
> +	int i;
> +
> +	mutex_lock(&uacce_mutex);
> +
> +	if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue)
> +		uacce->ops->stop_queue(q);
> +
> +	for (i = 0; i < UACCE_QFRT_MAX; i++) {
> +		qfr = q->qfrs[i];
> +		if (!qfr)
> +			continue;
> +
> +		is_to_free_region = false;
> +		uacce_queue_unmap_qfr(q, qfr);
> +		if (i == UACCE_QFRT_SS) {
> +			list_del(&q->list);
> +			if (list_empty(&qfr->qs))
> +				is_to_free_region = true;
> +		} else
> +			is_to_free_region = true;
> +
> +		if (is_to_free_region) {
> +			free_pages += qfr->nr_pages;
> +			uacce_destroy_region(q, qfr);
> +		}
> +
> +		qfr = NULL;
> +	}
> +
> +	if (current->mm == q->mm) {
> +		down_write(&q->mm->mmap_sem);
> +		q->mm->data_vm -= free_pages;
> +		up_write(&q->mm->mmap_sem);

This is bogus: you do not get any reference on the mm through
mmgrab(), so there is nothing protecting q->mm from being
released. Note that you do not want to do mmgrab() in open, as
the file descriptor can outlive the mm.

> +	}
> +
> +	if (uacce->flags & UACCE_DEV_SVA)
> +		iommu_sva_unbind_device(q->handle);
> +
> +	if ((q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED) &&
> +	     uacce->ops->put_queue)
> +		uacce->ops->put_queue(q);
> +
> +	kfree(q);
> +	mutex_unlock(&uacce_mutex);
> +
> +	module_put(uacce->pdev->driver->owner);

As the file can outlive the process, it might also outlive the module;
maybe you want to keep a reference on the module as part of the region
and release it in uacce_destroy_region().

> +
> +	return 0;
> +}
> +
> +static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
> +{
> +	struct uacce_queue *q = filep->private_data;
> +	struct uacce_device *uacce = q->uacce;
> +	struct uacce_qfile_region *qfr;
> +	enum uacce_qfrt type = 0;
> +	unsigned int flags = 0;
> +	int ret;
> +
> +	if (vma->vm_pgoff < UACCE_QFRT_MAX)
> +		type = vma->vm_pgoff;
> +
> +	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;

Don't you also want VM_WIPEONFORK?

> +
> +	mutex_lock(&uacce_mutex);
> +
> +	/* fixme: if the region need no pages, we don't need to check it */
> +	if (q->mm->data_vm + vma_pages(vma) >
> +	    rlimit(RLIMIT_DATA) >> PAGE_SHIFT) {
> +		ret = -ENOMEM;
> +		goto out_with_lock;
> +	}
> +
> +	if (q->qfrs[type]) {
> +		ret = -EBUSY;

What about -EEXIST? That test checks whether a region of the given
type already exists for the uacce_queue, which is private to
that file descriptor. So it means that the userspace which
opened the file is trying to create the same region type
again, one which already exists.

> +		goto out_with_lock;
> +	}
> +
> +	switch (type) {
> +	case UACCE_QFRT_MMIO:
> +		flags = UACCE_QFRF_SELFMT;
> +		break;
> +
> +	case UACCE_QFRT_SS:
> +		if (q->state != UACCE_Q_STARTED) {
> +			ret = -EINVAL;
> +			goto out_with_lock;
> +		}
> +
> +		if (uacce->flags & UACCE_DEV_SVA) {
> +			ret = -EINVAL;
> +			goto out_with_lock;
> +		}
> +
> +		flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
> +
> +		break;
> +
> +	case UACCE_QFRT_DKO:
> +		if (uacce->flags & UACCE_DEV_SVA) {
> +			ret = -EINVAL;
> +			goto out_with_lock;
> +		}
> +
> +		flags = UACCE_QFRF_MAP | UACCE_QFRF_KMAP;
> +
> +		break;
> +
> +	case UACCE_QFRT_DUS:
> +		if (uacce->flags & UACCE_DEV_SVA) {
> +			flags = UACCE_QFRF_SELFMT;
> +			break;
> +		}
> +
> +		flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
> +		break;
> +
> +	default:
> +		WARN_ON(&uacce->dev);
> +		break;
> +	}
> +
> +	qfr = uacce_create_region(q, vma, type, flags);
> +	if (IS_ERR(qfr)) {
> +		ret = PTR_ERR(qfr);
> +		goto out_with_lock;
> +	}
> +	q->qfrs[type] = qfr;
> +
> +	if (type == UACCE_QFRT_SS) {
> +		INIT_LIST_HEAD(&qfr->qs);
> +		list_add(&q->list, &q->qfrs[type]->qs);
> +	}
> +
> +	mutex_unlock(&uacce_mutex);
> +
> +	if (qfr->pages)
> +		q->mm->data_vm += qfr->nr_pages;

The mm->data_vm field is protected by the mmap_sem taken in write
mode AFAIR, so what you are doing here is unsafe.

> +
> +	return 0;
> +
> +out_with_lock:
> +	mutex_unlock(&uacce_mutex);
> +	return ret;
> +}
> +

[...]

> +/* Borrowed from VFIO to fix msi translation */
> +static bool uacce_iommu_has_sw_msi(struct iommu_group *group,
> +				   phys_addr_t *base)

I fail to see why you need this in a common framework; this
seems to be specific to a device.

> +{
> +	struct list_head group_resv_regions;
> +	struct iommu_resv_region *region, *next;
> +	bool ret = false;
> +
> +	INIT_LIST_HEAD(&group_resv_regions);
> +	iommu_get_group_resv_regions(group, &group_resv_regions);
> +	list_for_each_entry(region, &group_resv_regions, list) {
> +		/*
> +		 * The presence of any 'real' MSI regions should take
> +		 * precedence over the software-managed one if the
> +		 * IOMMU driver happens to advertise both types.
> +		 */
> +		if (region->type == IOMMU_RESV_MSI) {
> +			ret = false;
> +			break;
> +		}
> +
> +		if (region->type == IOMMU_RESV_SW_MSI) {
> +			*base = region->start;
> +			ret = true;
> +		}
> +	}
> +
> +	list_for_each_entry_safe(region, next, &group_resv_regions, list)
> +		kfree(region);
> +
> +	return ret;
> +}
> +
> +static int uacce_set_iommu_domain(struct uacce_device *uacce)
> +{
> +	struct iommu_domain *domain;
> +	struct iommu_group *group;
> +	struct device *dev = uacce->pdev;
> +	bool resv_msi;
> +	phys_addr_t resv_msi_base = 0;
> +	int ret;
> +
> +	if (uacce->flags & UACCE_DEV_SVA)
> +		return 0;
> +
> +	/* allocate and attach an unmanaged domain */
> +	domain = iommu_domain_alloc(uacce->pdev->bus);
> +	if (!domain) {
> +		dev_err(&uacce->dev, "cannot get domain for iommu\n");
> +		return -ENODEV;
> +	}
> +
> +	ret = iommu_attach_device(domain, uacce->pdev);
> +	if (ret)
> +		goto err_with_domain;
> +
> +	if (iommu_capable(dev->bus, IOMMU_CAP_CACHE_COHERENCY))
> +		uacce->prot |= IOMMU_CACHE;
> +
> +	group = iommu_group_get(dev);
> +	if (!group) {
> +		ret = -EINVAL;
> +		goto err_with_domain;
> +	}
> +
> +	resv_msi = uacce_iommu_has_sw_msi(group, &resv_msi_base);
> +	iommu_group_put(group);
> +
> +	if (resv_msi) {
> +		if (!irq_domain_check_msi_remap() &&
> +		    !iommu_capable(dev->bus, IOMMU_CAP_INTR_REMAP)) {
> +			dev_warn(dev, "No interrupt remapping support!");
> +			ret = -EPERM;
> +			goto err_with_domain;
> +		}
> +
> +		ret = iommu_get_msi_cookie(domain, resv_msi_base);
> +		if (ret)
> +			goto err_with_domain;
> +	}
> +
> +	return 0;
> +
> +err_with_domain:
> +	iommu_domain_free(domain);
> +	return ret;
> +}
> +
> +static void uacce_unset_iommu_domain(struct uacce_device *uacce)
> +{
> +	struct iommu_domain *domain;
> +
> +	if (uacce->flags & UACCE_DEV_SVA)
> +		return;
> +
> +	domain = iommu_get_domain_for_dev(uacce->pdev);
> +	if (!domain) {
> +		dev_err(&uacce->dev, "bug: no domain attached to device\n");
> +		return;
> +	}
> +
> +	iommu_detach_device(domain, uacce->pdev);
> +	iommu_domain_free(domain);
> +}
> +
> +/**
> + * uacce_register - register an accelerator
> + * @parent: pointer of uacce parent device
> + * @interface: pointer of uacce_interface for register
> + */
> +struct uacce_device *uacce_register(struct device *parent,
> +				    struct uacce_interface *interface)
> +{
> +	int ret;
> +	struct uacce_device *uacce;
> +	unsigned int flags = interface->flags;
> +
> +	uacce = kzalloc(sizeof(struct uacce_device), GFP_KERNEL);
> +	if (!uacce)
> +		return ERR_PTR(-ENOMEM);
> +
> +	if (flags & UACCE_DEV_SVA) {
> +		ret = iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA);
> +		if (ret)
> +			flags &= ~UACCE_DEV_SVA;
> +	}
> +
> +	uacce->pdev = parent;
> +	uacce->flags = flags;
> +	uacce->ops = interface->ops;
> +
> +	ret = uacce_set_iommu_domain(uacce);
> +	if (ret)
> +		goto err_free;

Why do you need to change the IOMMU domain? This is orthogonal to
what you are trying to achieve. The domain has nothing to do with SVA
or userspace queues (at least not on x86 AFAIK).


> +
> +	mutex_lock(&uacce_mutex);
> +
> +	ret = idr_alloc(&uacce_idr, uacce, 0, 0, GFP_KERNEL);
> +	if (ret < 0)
> +		goto err_with_lock;
> +
> +	uacce->cdev = cdev_alloc();
> +	uacce->cdev->ops = &uacce_fops;
> +	uacce->dev_id = ret;
> +	uacce->cdev->owner = THIS_MODULE;
> +	device_initialize(&uacce->dev);
> +	uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
> +	uacce->dev.class = uacce_class;
> +	uacce->dev.groups = uacce_dev_groups;
> +	uacce->dev.parent = uacce->pdev;
> +	uacce->dev.release = uacce_release;
> +	dev_set_name(&uacce->dev, "%s-%d", interface->name, uacce->dev_id);
> +	ret = cdev_device_add(uacce->cdev, &uacce->dev);
> +	if (ret)
> +		goto err_with_idr;
> +
> +	mutex_unlock(&uacce_mutex);
> +
> +	return uacce;
> +
> +err_with_idr:
> +	idr_remove(&uacce_idr, uacce->dev_id);
> +err_with_lock:
> +	mutex_unlock(&uacce_mutex);
> +	uacce_unset_iommu_domain(uacce);
> +err_free:
> +	if (flags & UACCE_DEV_SVA)
> +		iommu_dev_disable_feature(uacce->pdev, IOMMU_DEV_FEAT_SVA);
> +	kfree(uacce);
> +	return ERR_PTR(ret);
> +}
> +EXPORT_SYMBOL_GPL(uacce_register);

[...]

> diff --git a/include/linux/uacce.h b/include/linux/uacce.h
> new file mode 100644
> index 0000000..8ce0640
> --- /dev/null
> +++ b/include/linux/uacce.h
> @@ -0,0 +1,168 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +#ifndef _LINUX_UACCE_H
> +#define _LINUX_UACCE_H
> +
> +#include <linux/cdev.h>
> +#include <uapi/misc/uacce/uacce.h>
> +
> +#define UACCE_NAME		"uacce"
> +
> +struct uacce_queue;
> +struct uacce_device;
> +
> +/* uacce queue file flag, requires different operation */
> +#define UACCE_QFRF_MAP		BIT(0)	/* map to current queue */
> +#define UACCE_QFRF_MMAP		BIT(1)	/* map to user space */
> +#define UACCE_QFRF_KMAP		BIT(2)	/* map to kernel space */
> +#define UACCE_QFRF_DMA		BIT(3)	/* use dma api for the region */
> +#define UACCE_QFRF_SELFMT	BIT(4)	/* self maintained qfr */
> +
> +/**
> + * struct uacce_qfile_region - structure of queue file region
> + * @type: type of the qfr
> + * @iova: iova share between user and device space
> + * @pages: pages pointer of the qfr memory
> + * @nr_pages: page numbers of the qfr memory
> + * @prot: qfr protection flag
> + * @flags: flags of qfr
> + * @qs: list sharing the same region, for ss region
> + * @kaddr: kernel addr of the qfr
> + * @dma: dma address, if created by dma api
> + */
> +struct uacce_qfile_region {
> +	enum uacce_qfrt type;
> +	unsigned long iova;
> +	struct page **pages;
> +	u32 nr_pages;
> +	u32 prot;
> +	u32 flags;
> +	struct list_head qs;
> +	void *kaddr;
> +	dma_addr_t dma;
> +};
> +
> +/**
> + * struct uacce_ops - uacce device operations
> + * @get_available_instances:  get available instances left of the device
> + * @get_queue: get a queue from the device
> + * @put_queue: free a queue to the device
> + * @start_queue: make the queue start work after get_queue
> + * @stop_queue: make the queue stop work before put_queue
> + * @is_q_updated: check whether the task is finished
> + * @mask_notify: mask the task irq of queue
> + * @mmap: mmap addresses of queue to user space
> + * @reset: reset the uacce device
> + * @reset_queue: reset the queue
> + * @ioctl: ioctl for user space users of the queue
> + */
> +struct uacce_ops {
> +	int (*get_available_instances)(struct uacce_device *uacce);
> +	int (*get_queue)(struct uacce_device *uacce, unsigned long arg,
> +			 struct uacce_queue *q);
> +	void (*put_queue)(struct uacce_queue *q);
> +	int (*start_queue)(struct uacce_queue *q);
> +	void (*stop_queue)(struct uacce_queue *q);
> +	int (*is_q_updated)(struct uacce_queue *q);
> +	void (*mask_notify)(struct uacce_queue *q, int event_mask);
> +	int (*mmap)(struct uacce_queue *q, struct vm_area_struct *vma,
> +		    struct uacce_qfile_region *qfr);
> +	int (*reset)(struct uacce_device *uacce);
> +	int (*reset_queue)(struct uacce_queue *q);
> +	long (*ioctl)(struct uacce_queue *q, unsigned int cmd,
> +		      unsigned long arg);
> +};
> +
> +/**
> + * struct uacce_interface
> + * @name: the uacce device name.  Will show up in sysfs
> + * @flags: uacce device attributes
> + * @ops: pointer to the struct uacce_ops
> + *
> + * This structure is used for the uacce_register()
> + */
> +struct uacce_interface {
> +	char name[32];

You should add a define for the maximum length of name.

> +	unsigned int flags;

This should be enum uacce_dev_flag, not unsigned int, and that
enum should be defined above and not in the uAPI; see the comments
I made next to that enum.

> +	struct uacce_ops *ops;
> +};
> +
> +enum uacce_q_state {
> +	UACCE_Q_INIT,
> +	UACCE_Q_STARTED,
> +	UACCE_Q_ZOMBIE,
> +};
> +
> +/**
> + * struct uacce_queue
> + * @uacce: pointer to uacce
> + * @priv: private pointer
> + * @wait: wait queue head
> + * @pasid: pasid of the queue
> + * @handle: iommu_sva handle return from iommu_sva_bind_device
> + * @list: share list for qfr->qs
> + * @mm: current->mm
> + * @qfrs: pointer of qfr regions
> + * @state: queue state machine
> + */
> +struct uacce_queue {
> +	struct uacce_device *uacce;
> +	void *priv;
> +	wait_queue_head_t wait;
> +	int pasid;
> +	struct iommu_sva *handle;
> +	struct list_head list;
> +	struct mm_struct *mm;
> +	struct uacce_qfile_region *qfrs[UACCE_QFRT_MAX];
> +	enum uacce_q_state state;
> +};
> +
> +/**
> + * struct uacce_device
> + * @algs: supported algorithms
> + * @api_ver: api version
> + * @qf_pg_size: page size of the queue file regions
> + * @ops: pointer to the struct uacce_ops
> + * @pdev: pointer to the parent device
> + * @is_vf: whether virtual function
> + * @flags: uacce attributes
> + * @dev_id: id of the uacce device
> + * @prot: uacce protection flag
> + * @cdev: cdev of the uacce
> + * @dev: dev of the uacce
> + * @priv: private pointer of the uacce
> + */
> +struct uacce_device {
> +	const char *algs;
> +	const char *api_ver;
> +	unsigned long qf_pg_size[UACCE_QFRT_MAX];
> +	struct uacce_ops *ops;
> +	struct device *pdev;
> +	bool is_vf;
> +	u32 flags;
> +	u32 dev_id;
> +	u32 prot;
> +	struct cdev *cdev;
> +	struct device dev;
> +	void *priv;
> +};
> +
> +#if IS_ENABLED(CONFIG_UACCE)
> +
> +struct uacce_device *uacce_register(struct device *parent,
> +				    struct uacce_interface *interface);
> +void uacce_unregister(struct uacce_device *uacce);
> +
> +#else /* CONFIG_UACCE */
> +
> +static inline
> +struct uacce_device *uacce_register(struct device *parent,
> +				    struct uacce_interface *interface)
> +{
> +	return ERR_PTR(-ENODEV);
> +}
> +
> +static inline void uacce_unregister(struct uacce_device *uacce) {}
> +
> +#endif /* CONFIG_UACCE */
> +
> +#endif /* _LINUX_UACCE_H */
> diff --git a/include/uapi/misc/uacce/uacce.h b/include/uapi/misc/uacce/uacce.h
> new file mode 100644
> index 0000000..c859668
> --- /dev/null
> +++ b/include/uapi/misc/uacce/uacce.h
> @@ -0,0 +1,41 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +#ifndef _UAPIUUACCE_H
> +#define _UAPIUUACCE_H
> +
> +#include <linux/types.h>
> +#include <linux/ioctl.h>
> +
> +#define UACCE_CMD_SHARE_SVAS	_IO('W', 0)
> +#define UACCE_CMD_START		_IO('W', 1)
> +#define UACCE_CMD_PUT_Q		_IO('W', 2)
> +
> +/**
> + * enum uacce_dev_flag: Device flags:
> + * @UACCE_DEV_SHARE_DOMAIN: no PASID, can share sva for one process
> + * @UACCE_DEV_SVA: Shared Virtual Addresses
> + *		   Support PASID
> + *		   Support device page fault (pcie device) or
> + *		   smmu stall (platform device)
> + */
> +enum uacce_dev_flag {
> +	UACCE_DEV_SHARE_DOMAIN = 0x0,

UACCE_DEV_SHARE_DOMAIN is not used anywhere; better not to introduce
something that is never used.


> +	UACCE_DEV_SVA = 0x1,
> +};

A more general question: why is this part of the UAPI header file?
To me it seems those flags are only used internally by the kernel and
never need to be exposed to userspace.

> +
> +/**
> + * enum uacce_qfrt: qfrt type
> + * @UACCE_QFRT_MMIO: device mmio region
> + * @UACCE_QFRT_DKO: device kernel-only region
> + * @UACCE_QFRT_DUS: device user share region
> + * @UACCE_QFRT_SS: static shared memory region
> + * @UACCE_QFRT_MAX: indicate the boundary
> + */

Your first driver only uses DUS and MMIO; you should not define
things that are not even used by the first driver, especially when
it comes to userspace API.

> +enum uacce_qfrt {
> +	UACCE_QFRT_MMIO = 0,
> +	UACCE_QFRT_DKO = 1,
> +	UACCE_QFRT_DUS = 2,
> +	UACCE_QFRT_SS = 3,
> +	UACCE_QFRT_MAX = 16,

Isn't 16 a bit low? Do you really need a maximum? I would not
expose or advertise a maximum in this userspace-facing header.


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 2/3] uacce: add uacce driver
       [not found]     ` <5da9a9cd.1c69fb81.9f8e8.60faSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2019-10-23  7:42       ` Jean-Philippe Brucker
  2019-10-23 17:03         ` Jerome Glisse
       [not found]         ` <5db25e56.1c69fb81.4fe57.380cSMTPIN_ADDED_BROKEN@mx.google.com>
  0 siblings, 2 replies; 15+ messages in thread
From: Jean-Philippe Brucker @ 2019-10-23  7:42 UTC (permalink / raw)
  To: zhangfei.gao
  Cc: Zhangfei Gao, Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu,
	jonathan.cameron, grant.likely, ilias.apalodimas, francois.ozog,
	kenneth-lee-2012, Wangzhou, haojian . zhuang, linux-accelerators,
	linux-kernel, linux-crypto, Kenneth Lee, Zaibo Xu

On Fri, Oct 18, 2019 at 08:01:44PM +0800, zhangfei.gao@foxmail.com wrote:
> > More generally, it would be nice to use the DMA API when SVA isn't
> > supported, instead of manually allocating and mapping memory with
> > iommu_map(). Do we only handcraft these functions in order to have VA ==
> > IOVA?  On its own it doesn't seem like a strong enough reason to avoid the
> > DMA API.
> Here we use an unmanaged domain to prevent VA conflicts with IOVA.
> The goal is still to build a shared virtual address space even though SVA
> is not supported.

If SVA isn't supported, having VA == IOVA looks nice but isn't
particularly useful. We could instead require that, if SVA isn't
supported, userspace handles VA and IOVA separately for any DMA region.

Enforcing VA == IOVA adds some unnecessary complexity to this module. In
addition to the special case for software MSIs that is already there
(uacce_iommu_has_sw_msi), it's also not guaranteed that the whole VA space
is representable with IOVAs, you might need to poke holes in the IOVA
space for reserved regions (See iommu.*resv). For example VFIO checks that
the IOVA requested by userspace doesn't fall into a reserved range (see
iova_list in vfio_iommu_type1.c). It also exports to userspace a list of
possible IOVAs through VFIO_IOMMU_GET_INFO.

Letting the DMA API allocate addresses would be simpler, since it already
deals with resv regions and software MSI.

> The IOVA from the DMA API can be the same as a VA, and the device cannot
> distinguish them.
> So here we borrow the VA from user space and iommu_map it to the device,
> and the VA becomes the IOVA.
> Since this IOVA comes from user space, there is no conflict.
> So the DMA API cannot be used in this case.
> 
> drivers/vfio/vfio_iommu_type1.c also uses iommu_domain_alloc.

VFIO needs to let userspace pick its IOVA, because the IOVA space is
generally managed by a guest OS. In my opinion this is a baggage that
uacce doesn't need.

If we only supported the DMA API and not unmanaged IOMMU domains,
userspace would need to do a little bit more work by differentiating
between VA and DMA addresses, but that could be abstracted into the uacce
library and it would make the kernel module a lot simpler.

[...]
> > I wish the SVA and !SVA paths were less interleaved. Both models are
> > fundamentally different:
> > 
> > * Without SVA you cannot share the device between multiple processes. All
> >    DMA mappings are in the "main", non-PASID address space of the device.
> > 
> >    Note that process isolation without SVA could be achieved with the
> >    auxiliary domains IOMMU API (introduced primarily for vfio-mdev) but
> >    this is not the model chosen here.
> Does PASID have to be supported for this case?

Yes, you do need PASID support for auxiliary domains, but not PRI/Stall.

[...]
> > > +	/* allocate memory */
> > > +	if (flags & UACCE_QFRF_DMA) {
> > At the moment UACCE_QFRF_DMA is never set, so there is a lot of unused and
> > possibly untested code in this file. I think it would be simpler to choose
> > between either DMA API or unmanaged IOMMU domains and stick with it. As
> > said before, I'd prefer DMA API.
> UACCE_QFRF_DMA uses the DMA API; we used it as a quick method, though it
> cannot prevent VA conflicts.
> We use an ioctl to get the IOVA of the DMA buffer.
> Since the interface is not standard, we kept it and verified it
> internally.

As above, it's probably worth exploring this method further for !SVA.

> > > +		qfr->kaddr = dma_alloc_coherent(uacce->pdev,
> > > +						qfr->nr_pages << PAGE_SHIFT,
> > > +						&qfr->dma, GFP_KERNEL);
> > > +		if (!qfr->kaddr) {
> > > +			ret = -ENOMEM;
> > > +			goto err_with_qfr;
> > > +		}
> > > +	} else {
> > > +		ret = uacce_qfr_alloc_pages(qfr);
> > > +		if (ret)
> > > +			goto err_with_qfr;
> > > +	}
> > > +
> > > +	/* map to device */
> > > +	ret = uacce_queue_map_qfr(q, qfr);
> > Worth moving into the else above.
> The idea here is: a) map to the device, b) map to user space.

Yes but dma_alloc_coherent() creates the IOMMU mapping, and
uacce_queue_map_qfr()'s only task is to create the IOMMU mapping when the
DMA API isn't in use, so you could move this call up, right after
uacce_qfr_alloc_pages().

[...]
> > > +	q->state = UACCE_Q_ZOMBIE;
> > Since the PUT_Q ioctl makes the queue unrecoverable, why should userspace
> > invoke it instead of immediately calling close()?
> We found close() does not release resources immediately, which may cause
> an issue when re-opening while all queues are in use.

I think the only way to fix that problem is to avoid reallocating the
resources until they are released, because we can't count on userspace to
always call the PUT_Q ioctl. Sometimes the program will crash before that.

> > > +static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
> > > +{
> > > +	struct uacce_queue *q = filep->private_data;
> > > +	struct uacce_device *uacce = q->uacce;
> > > +	struct uacce_qfile_region *qfr;
> > > +	enum uacce_qfrt type = 0;
> > > +	unsigned int flags = 0;
> > > +	int ret;
> > > +
> > > +	if (vma->vm_pgoff < UACCE_QFRT_MAX)
> > > +		type = vma->vm_pgoff;
> > > +
> > > +	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
> > > +
> > > +	mutex_lock(&uacce_mutex);

By the way, lockdep detects a possible unsafe locking scenario here,
because we're taking the uacce_mutex even though mmap called us with the
mmap_sem held for writing. Conversely uacce_fops_release() takes the
mmap_sem for writing while holding the uacce_mutex. I think it can be
fixed easily, if we simply remove the use of mmap_sem in
uacce_fops_release(), since it's only taken to do some accounting which
doesn't look right.

However, a similar but more complex locking issue comes from the current
use of iommu_sva_bind/unbind_device():

uacce_fops_open:
 iommu_sva_unbind_device()
  iommu_sva_bind_group()	[iommu_group->mutex]
    mmu_notifier_get()		[mmap_sem]

uacce_fops_mmap:		[mmap_sem]
				[uacce_mutex]

uacce_fops_release:
				[uacce_mutex]
  iommu_sva_unbind_device()	[iommu_group->mutex]

This circular dependency can be broken by calling iommu_sva_unbind_device()
outside of uacce_mutex, but I think it's worth reworking the queue locking
scheme a little and use fine-grained locking for the queue state.

Something else I noticed is that uacce_idr isn't currently protected. The
IDR API expects the caller to use its own locking scheme. You could replace
it with an xarray, which I think is preferred to IDR now and provides its
own xa_lock.

Thanks,
Jean


* Re: [PATCH v6 2/3] uacce: add uacce driver
  2019-10-16 17:28   ` Jean-Philippe Brucker
       [not found]     ` <5da9a9cd.1c69fb81.9f8e8.60faSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2019-10-23 16:58     ` Jerome Glisse
       [not found]       ` <5db257c6.1c69fb81.bfe34.a4afSMTPIN_ADDED_BROKEN@mx.google.com>
  1 sibling, 1 reply; 15+ messages in thread
From: Jerome Glisse @ 2019-10-23 16:58 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: Zhangfei Gao, francois.ozog, Herbert Xu, Arnd Bergmann,
	Greg Kroah-Hartman, Zaibo Xu, ilias.apalodimas, linux-kernel,
	linux-crypto, Wangzhou, grant.likely, haojian . zhuang,
	Kenneth Lee, linux-accelerators, kenneth-lee-2012

On Wed, Oct 16, 2019 at 07:28:02PM +0200, Jean-Philippe Brucker wrote:
[...]

> > +static struct uacce_qfile_region *
> > +uacce_create_region(struct uacce_queue *q, struct vm_area_struct *vma,
> > +		    enum uacce_qfrt type, unsigned int flags)
> > +{
> > +	struct uacce_qfile_region *qfr;
> > +	struct uacce_device *uacce = q->uacce;
> > +	unsigned long vm_pgoff;
> > +	int ret = -ENOMEM;
> > +
> > +	qfr = kzalloc(sizeof(*qfr), GFP_ATOMIC);
> > +	if (!qfr)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	qfr->type = type;
> > +	qfr->flags = flags;
> > +	qfr->iova = vma->vm_start;
> > +	qfr->nr_pages = vma_pages(vma);
> > +
> > +	if (vma->vm_flags & VM_READ)
> > +		qfr->prot |= IOMMU_READ;
> > +
> > +	if (vma->vm_flags & VM_WRITE)
> > +		qfr->prot |= IOMMU_WRITE;
> > +
> > +	if (flags & UACCE_QFRF_SELFMT) {
> > +		if (!uacce->ops->mmap) {
> > +			ret = -EINVAL;
> > +			goto err_with_qfr;
> > +		}
> > +
> > +		ret = uacce->ops->mmap(q, vma, qfr);
> > +		if (ret)
> > +			goto err_with_qfr;
> > +		return qfr;
> > +	}
> 
> I wish the SVA and !SVA paths were less interleaved. Both models are
> fundamentally different:
> 
> * Without SVA you cannot share the device between multiple processes. All
>   DMA mappings are in the "main", non-PASID address space of the device.
> 
>   Note that process isolation without SVA could be achieved with the
>   auxiliary domains IOMMU API (introduced primarily for vfio-mdev) but
>   this is not the model chosen here.
> 
> * With SVA you can share the device between multiple processes. But if the
>   process can somehow program its portion of the device to access the main
>   address space, you lose isolation. Only the kernel must be able to
>   program and access the main address space.
> 
> When interleaving both code paths it's easy to make a mistake and lose
> this isolation. Although I think this code is correct, it took me some
> time to understand that we never end up calling dma_alloc or iommu_map
> when using SVA. Might be worth at least adding a check that if
> UACCE_DEV_SVA, then we never end up in the bottom part of this function.

I would go even further and just remove the DMA path, as it is not used.
But yes, at a bare minimum it needs to be completely separate to avoid
confusion.


[...]


> > +static int uacce_fops_open(struct inode *inode, struct file *filep)
> > +{
> > +	struct uacce_queue *q;
> > +	struct iommu_sva *handle = NULL;
> > +	struct uacce_device *uacce;
> > +	int ret;
> > +	int pasid = 0;
> > +
> > +	uacce = idr_find(&uacce_idr, iminor(inode));
> > +	if (!uacce)
> > +		return -ENODEV;
> > +
> > +	if (!try_module_get(uacce->pdev->driver->owner))
> > +		return -ENODEV;
> > +
> > +	ret = uacce_dev_open_check(uacce);
> > +	if (ret)
> > +		goto out_with_module;
> > +
> > +	if (uacce->flags & UACCE_DEV_SVA) {
> > +		handle = iommu_sva_bind_device(uacce->pdev, current->mm, NULL);
> > +		if (IS_ERR(handle))
> > +			goto out_with_module;
> > +		pasid = iommu_sva_get_pasid(handle);
> 
> We need to register an mm_exit callback. Once we return, userspace will
> start running jobs on the accelerator. If the process is killed while the
> accelerator is running, the mm_exit callback tells the device driver to
> stop using this PASID (stop_queue()), so that it can be reallocated for
> another process.
> 
> Implementing this with the right locking and ordering can be tricky. I'll
> try to implement the callback and test it on the device this week.

It already exists: it is called an MMU notifier. You can register an
mmu_notifier and get a callback once the mm exits.

Cheers,
Jérôme



* Re: [PATCH v6 2/3] uacce: add uacce driver
  2019-10-23  7:42       ` Jean-Philippe Brucker
@ 2019-10-23 17:03         ` Jerome Glisse
       [not found]         ` <5db25e56.1c69fb81.4fe57.380cSMTPIN_ADDED_BROKEN@mx.google.com>
  1 sibling, 0 replies; 15+ messages in thread
From: Jerome Glisse @ 2019-10-23 17:03 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: zhangfei.gao, francois.ozog, Herbert Xu, Arnd Bergmann,
	Greg Kroah-Hartman, Zaibo Xu, ilias.apalodimas, linux-kernel,
	linux-crypto, Wangzhou, grant.likely, haojian . zhuang,
	Zhangfei Gao, Kenneth Lee, linux-accelerators, kenneth-lee-2012

On Wed, Oct 23, 2019 at 09:42:27AM +0200, Jean-Philippe Brucker wrote:
> On Fri, Oct 18, 2019 at 08:01:44PM +0800, zhangfei.gao@foxmail.com wrote:

[...]

> > > > +static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
> > > > +{
> > > > +	struct uacce_queue *q = filep->private_data;
> > > > +	struct uacce_device *uacce = q->uacce;
> > > > +	struct uacce_qfile_region *qfr;
> > > > +	enum uacce_qfrt type = 0;
> > > > +	unsigned int flags = 0;
> > > > +	int ret;
> > > > +
> > > > +	if (vma->vm_pgoff < UACCE_QFRT_MAX)
> > > > +		type = vma->vm_pgoff;
> > > > +
> > > > +	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
> > > > +
> > > > +	mutex_lock(&uacce_mutex);
> 
> By the way, lockdep detects a possible unsafe locking scenario here,
> because we're taking the uacce_mutex even though mmap called us with the
> mmap_sem held for writing. Conversely uacce_fops_release() takes the
> mmap_sem for writing while holding the uacce_mutex. I think it can be
> fixed easily, if we simply remove the use of mmap_sem in
> uacce_fops_release(), since it's only taken to do some accounting which
> doesn't look right.

I think you need to remove the RLIMIT_DATA accounting altogether. Assume
it is not an issue for now and revisit later when it becomes one, as I am
not sure we want to add this queue memory accounting to RLIMIT_DATA in
the first place. Maybe a memory cgroup. In any case it is safer to delay
this discussion until later.

Cheers,
Jérôme



* Re: [PATCH v6 2/3] uacce: add uacce driver
       [not found]     ` <20191024064129.GB17723@kllp10>
@ 2019-10-24 14:17       ` Jerome Glisse
  0 siblings, 0 replies; 15+ messages in thread
From: Jerome Glisse @ 2019-10-24 14:17 UTC (permalink / raw)
  To: Kenneth Lee
  Cc: Zhangfei Gao, Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu,
	jonathan.cameron, grant.likely, jean-philippe, ilias.apalodimas,
	francois.ozog, Wangzhou, haojian . zhuang, Zaibo Xu,
	linux-kernel, linux-crypto, Kenneth Lee, linux-accelerators

On Thu, Oct 24, 2019 at 02:41:29PM +0800, Kenneth Lee wrote:
> On Tue, Oct 22, 2019 at 02:49:29PM -0400, Jerome Glisse wrote:
> > Date: Tue, 22 Oct 2019 14:49:29 -0400
> > From: Jerome Glisse <jglisse@redhat.com>
> > To: Zhangfei Gao <zhangfei.gao@linaro.org>
> > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Arnd Bergmann
> >  <arnd@arndb.de>, Herbert Xu <herbert@gondor.apana.org.au>,
> >  jonathan.cameron@huawei.com, grant.likely@arm.com, jean-philippe
> >  <jean-philippe@linaro.org>, ilias.apalodimas@linaro.org,
> >  francois.ozog@linaro.org, kenneth-lee-2012@foxmail.com, Wangzhou
> >  <wangzhou1@hisilicon.com>, "haojian . zhuang" <haojian.zhuang@linaro.org>,
> >  Zaibo Xu <xuzaibo@huawei.com>, linux-kernel@vger.kernel.org,
> >  linux-crypto@vger.kernel.org, Kenneth Lee <liguozhu@hisilicon.com>,
> >  linux-accelerators@lists.ozlabs.org
> > Subject: Re: [PATCH v6 2/3] uacce: add uacce driver
> > Message-ID: <20191022184929.GC5169@redhat.com>
> > 
> > On Wed, Oct 16, 2019 at 04:34:32PM +0800, Zhangfei Gao wrote:
> > > From: Kenneth Lee <liguozhu@hisilicon.com>
> > > 
> > > Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
> > > provide Shared Virtual Addressing (SVA) between accelerators and processes,
> > > so an accelerator can access any data structure of the main CPU.
> > > This differs from data sharing between the CPU and an I/O device, which
> > > share data content rather than addresses.
> > > With a unified address space, the hardware and the user space of a process
> > > can share the same virtual address in their communication.
> > > 
> > > Uacce creates a chrdev for every registration; a queue is allocated to
> > > the process when the chrdev is opened. The process can then access the
> > > hardware resources by interacting with the queue file. By mmapping the
> > > queue file space to user space, the process can directly submit requests
> > > to the hardware without syscalls into the kernel.
> > 
> > You need to remove all API that is not used by your first driver, as
> > it will most likely bit-rot without users. It is much better to add
> > things when a driver starts to make use of them.
> 
> Yes. Good point. Thank you:)
> 
> > 
> > I am still not convinced of the value of adding a new framework here
> > with only a single device as an example. It looks similar to some of
> > the fpga devices. Sadly, because framework layering is not something
> > that exists, I guess inventing a new framework is the only answer when
> > you cannot quite fit into an existing one.
> > 
> > A more fundamental question is why you need to change the IOMMU
> > domain of the device. I do not see any reason for that unless PASID
> > has some restriction on ARM that I do not know of.
> 
> But I think this is the only way. As I understand it, by default the
> system creates a DMA IOMMU domain for each device behind an IOMMU. If
> we want to call the IOMMU interface directly, we have to rebind the
> device to an unmanaged domain.

Why would you need to call the IOMMU directly? On some GPUs we do use
PASID and we do not rebind to a different domain; we just don't mess
with that. So I do not see any reason to change the domain.

Cheers,
Jérôme



* Re: [PATCH v6 2/3] uacce: add uacce driver
  2019-10-22 18:49   ` Jerome Glisse
       [not found]     ` <20191024064129.GB17723@kllp10>
@ 2019-10-25  7:01     ` zhangfei
  1 sibling, 0 replies; 15+ messages in thread
From: zhangfei @ 2019-10-25  7:01 UTC (permalink / raw)
  To: Jerome Glisse
  Cc: Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu, jonathan.cameron,
	grant.likely, jean-philippe, ilias.apalodimas, francois.ozog,
	kenneth-lee-2012, Wangzhou, haojian . zhuang, Zaibo Xu,
	linux-kernel, linux-crypto, Kenneth Lee, linux-accelerators

Hi, Jerome

Thanks for the suggestions.

On 2019/10/23 2:49 AM, Jerome Glisse wrote:
> On Wed, Oct 16, 2019 at 04:34:32PM +0800, Zhangfei Gao wrote:
>> From: Kenneth Lee <liguozhu@hisilicon.com>
>>
>> Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
>> provide Shared Virtual Addressing (SVA) between accelerators and processes,
>> so an accelerator can access any data structure of the main CPU.
>> This differs from data sharing between the CPU and an I/O device, which
>> share data content rather than addresses.
>> With a unified address space, the hardware and the user space of a process
>> can share the same virtual address in their communication.
>>
>> Uacce creates a chrdev for every registration; a queue is allocated to
>> the process when the chrdev is opened. The process can then access the
>> hardware resources by interacting with the queue file. By mmapping the
>> queue file space to user space, the process can directly submit requests
>> to the hardware without syscalls into the kernel.
> You need to remove all API that is not used by your first driver, as
> it will most likely bit-rot without users. It is much better to add
> things when a driver starts to make use of them.
>
> I am still not convinced of the value of adding a new framework here
> with only a single device as an example. It looks similar to some of
> the fpga devices. Sadly, because framework layering is not something
> that exists, I guess inventing a new framework is the only answer when
> you cannot quite fit into an existing one.
>
> A more fundamental question is why you need to change the IOMMU
> domain of the device. I do not see any reason for that unless PASID
> has some restriction on ARM that I do not know of.
>
> I do have multiple comments and point out various serious issues
> below.
>
> As it is, from my POV it is a NAK. Note that I am not opposed to
> adding a new framework, just that you need to trim things down
> to what is used by your first driver, and you also need to address
> the various issues I point out below.
>
> Cheers,
> Jérôme
>
>> Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
>> Signed-off-by: Zaibo Xu <xuzaibo@huawei.com>
>> Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
>> Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
>> ---
>>   Documentation/ABI/testing/sysfs-driver-uacce |  65 ++
>>   drivers/misc/Kconfig                         |   1 +
>>   drivers/misc/Makefile                        |   1 +
>>   drivers/misc/uacce/Kconfig                   |  13 +
>>   drivers/misc/uacce/Makefile                  |   2 +
>>   drivers/misc/uacce/uacce.c                   | 995 +++++++++++++++++++++++++++
>>   include/linux/uacce.h                        | 168 +++++
>>   include/uapi/misc/uacce/uacce.h              |  41 ++
>>   8 files changed, 1286 insertions(+)
>>   create mode 100644 Documentation/ABI/testing/sysfs-driver-uacce
>>   create mode 100644 drivers/misc/uacce/Kconfig
>>   create mode 100644 drivers/misc/uacce/Makefile
>>   create mode 100644 drivers/misc/uacce/uacce.c
>>   create mode 100644 include/linux/uacce.h
>>   create mode 100644 include/uapi/misc/uacce/uacce.h
>>
> [...]
>
>> diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
>> new file mode 100644
>> index 0000000..534ddc3
>> --- /dev/null
>> +++ b/drivers/misc/uacce/uacce.c
>> @@ -0,0 +1,995 @@
>> +// SPDX-License-Identifier: GPL-2.0-or-later
>> +#include <linux/compat.h>
>> +#include <linux/dma-iommu.h>
>> +#include <linux/file.h>
>> +#include <linux/irqdomain.h>
>> +#include <linux/module.h>
>> +#include <linux/poll.h>
>> +#include <linux/sched/signal.h>
>> +#include <linux/uacce.h>
>> +
>> +static struct class *uacce_class;
>> +static DEFINE_IDR(uacce_idr);
>> +static dev_t uacce_devt;
>> +static DEFINE_MUTEX(uacce_mutex);
>> +static const struct file_operations uacce_fops;
>> +
>> +static int uacce_queue_map_qfr(struct uacce_queue *q,
>> +			       struct uacce_qfile_region *qfr)
>> +{
>> +	struct device *dev = q->uacce->pdev;
>> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
>> +	int i, j, ret;
>> +
>> +	if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA))
>> +		return 0;
>> +
>> +	if (!domain)
>> +		return -ENODEV;
>> +
>> +	for (i = 0; i < qfr->nr_pages; i++) {
>> +		ret = iommu_map(domain, qfr->iova + i * PAGE_SIZE,
>> +				page_to_phys(qfr->pages[i]),
>> +				PAGE_SIZE, qfr->prot | q->uacce->prot);
>> +		if (ret)
>> +			goto err_with_map_pages;
>> +
>> +		get_page(qfr->pages[i]);
>> +	}
>> +
>> +	return 0;
>> +
>> +err_with_map_pages:
>> +	for (j = i - 1; j >= 0; j--) {
>> +		iommu_unmap(domain, qfr->iova + j * PAGE_SIZE, PAGE_SIZE);
>> +		put_page(qfr->pages[j]);
>> +	}
>> +	return ret;
>> +}
>> +
>> +static void uacce_queue_unmap_qfr(struct uacce_queue *q,
>> +				  struct uacce_qfile_region *qfr)
>> +{
>> +	struct device *dev = q->uacce->pdev;
>> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
>> +	int i;
>> +
>> +	if (!domain || !qfr)
>> +		return;
>> +
>> +	if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA))
>> +		return;
>> +
>> +	for (i = qfr->nr_pages - 1; i >= 0; i--) {
>> +		iommu_unmap(domain, qfr->iova + i * PAGE_SIZE, PAGE_SIZE);
>> +		put_page(qfr->pages[i]);
>> +	}
>> +}
>> +
>> +static int uacce_qfr_alloc_pages(struct uacce_qfile_region *qfr)
>> +{
>> +	int i, j;
>> +
>> +	qfr->pages = kcalloc(qfr->nr_pages, sizeof(*qfr->pages), GFP_ATOMIC);
>> +	if (!qfr->pages)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < qfr->nr_pages; i++) {
>> +		qfr->pages[i] = alloc_page(GFP_ATOMIC | __GFP_ZERO);
>> +		if (!qfr->pages[i])
>> +			goto err_with_pages;
>> +	}
>> +
>> +	return 0;
>> +
>> +err_with_pages:
>> +	for (j = i - 1; j >= 0; j--)
>> +		put_page(qfr->pages[j]);
>> +
>> +	kfree(qfr->pages);
>> +	return -ENOMEM;
>> +}
>> +
>> +static void uacce_qfr_free_pages(struct uacce_qfile_region *qfr)
>> +{
>> +	int i;
>> +
>> +	for (i = 0; i < qfr->nr_pages; i++)
>> +		put_page(qfr->pages[i]);
>> +
>> +	kfree(qfr->pages);
>> +}
>> +
>> +static inline int uacce_queue_mmap_qfr(struct uacce_queue *q,
>> +				       struct uacce_qfile_region *qfr,
>> +				       struct vm_area_struct *vma)
>> +{
>> +	int i, ret;
>> +
>> +	for (i = 0; i < qfr->nr_pages; i++) {
>> +		ret = remap_pfn_range(vma, vma->vm_start + (i << PAGE_SHIFT),
>> +				      page_to_pfn(qfr->pages[i]), PAGE_SIZE,
>> +				      vma->vm_page_prot);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static struct uacce_qfile_region *
>> +uacce_create_region(struct uacce_queue *q, struct vm_area_struct *vma,
>> +		    enum uacce_qfrt type, unsigned int flags)
>> +{
>> +	struct uacce_qfile_region *qfr;
>> +	struct uacce_device *uacce = q->uacce;
>> +	unsigned long vm_pgoff;
>> +	int ret = -ENOMEM;
>> +
>> +	qfr = kzalloc(sizeof(*qfr), GFP_ATOMIC);
>> +	if (!qfr)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	qfr->type = type;
>> +	qfr->flags = flags;
>> +	qfr->iova = vma->vm_start;
>> +	qfr->nr_pages = vma_pages(vma);
>> +
>> +	if (vma->vm_flags & VM_READ)
>> +		qfr->prot |= IOMMU_READ;
>> +
>> +	if (vma->vm_flags & VM_WRITE)
>> +		qfr->prot |= IOMMU_WRITE;
>> +
>> +	if (flags & UACCE_QFRF_SELFMT) {
>> +		if (!uacce->ops->mmap) {
>> +			ret = -EINVAL;
>> +			goto err_with_qfr;
>> +		}
>> +
>> +		ret = uacce->ops->mmap(q, vma, qfr);
>> +		if (ret)
>> +			goto err_with_qfr;
>> +		return qfr;
>> +	}
>> +
>> +	/* allocate memory */
>> +	if (flags & UACCE_QFRF_DMA) {
>> +		qfr->kaddr = dma_alloc_coherent(uacce->pdev,
>> +						qfr->nr_pages << PAGE_SHIFT,
>> +						&qfr->dma, GFP_KERNEL);
>> +		if (!qfr->kaddr) {
>> +			ret = -ENOMEM;
>> +			goto err_with_qfr;
>> +		}
>> +	} else {
>> +		ret = uacce_qfr_alloc_pages(qfr);
>> +		if (ret)
>> +			goto err_with_qfr;
>> +	}
>> +
>> +	/* map to device */
>> +	ret = uacce_queue_map_qfr(q, qfr);
>> +	if (ret)
>> +		goto err_with_pages;
>> +
>> +	/* mmap to user space */
>> +	if (flags & UACCE_QFRF_MMAP) {
>> +		if (flags & UACCE_QFRF_DMA) {
>> +			/* dma_mmap_coherent() requires vm_pgoff as 0
>> +			 * restore vm_pfoff to initial value for mmap()
>> +			 */
> I would argue that dma_mmap_coherent() is not the right function
> to use here; you might be better off doing remap_pfn_range() on your
> own.
>
> Working around an existing API is not something you want to do: it can
> easily break, and it makes it harder for people who want to update that
> API without breaking anyone.
Here dma_mmap_coherent() mmaps the buffer from dma_alloc_coherent().
Will remove the DMA API first and only consider the SVA case.
>
>> +			vm_pgoff = vma->vm_pgoff;
>> +			vma->vm_pgoff = 0;
>> +			ret = dma_mmap_coherent(uacce->pdev, vma, qfr->kaddr,
>> +						qfr->dma,
>> +						qfr->nr_pages << PAGE_SHIFT);
>> +			vma->vm_pgoff = vm_pgoff;
>> +		} else {
>> +			ret = uacce_queue_mmap_qfr(q, qfr, vma);
>> +		}
>> +
>> +		if (ret)
>> +			goto err_with_mapped_qfr;
>> +	}
>> +
>> +	return qfr;
>> +
>> +err_with_mapped_qfr:
>> +	uacce_queue_unmap_qfr(q, qfr);
>> +err_with_pages:
>> +	if (flags & UACCE_QFRF_DMA)
>> +		dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT,
>> +				  qfr->kaddr, qfr->dma);
>> +	else
>> +		uacce_qfr_free_pages(qfr);
>> +err_with_qfr:
>> +	kfree(qfr);
>> +
>> +	return ERR_PTR(ret);
>> +}
>> +
>> +static void uacce_destroy_region(struct uacce_queue *q,
>> +				 struct uacce_qfile_region *qfr)
>> +{
>> +	struct uacce_device *uacce = q->uacce;
>> +
>> +	if (qfr->flags & UACCE_QFRF_DMA) {
>> +		dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT,
>> +				  qfr->kaddr, qfr->dma);
>> +	} else if (qfr->pages) {
>> +		if (qfr->flags & UACCE_QFRF_KMAP && qfr->kaddr) {
>> +			vunmap(qfr->kaddr);
>> +			qfr->kaddr = NULL;
>> +		}
>> +
>> +		uacce_qfr_free_pages(qfr);
>> +	}
>> +	kfree(qfr);
>> +}
>> +
>> +static long uacce_cmd_share_qfr(struct uacce_queue *tgt, int fd)
> It would be nice to comment on what this function does; AFAICT it tries
> to share a uacce_qfile_region. Anyway, this should be removed
> altogether as it is not used by your first driver.
Will only consider the SVA case in the first patch, and remove this ioctl then.
>
>> +{
>> +	struct file *filep;
>> +	struct uacce_queue *src;
>> +	int ret = -EINVAL;
>> +
>> +	mutex_lock(&uacce_mutex);
>> +
>> +	if (tgt->state != UACCE_Q_STARTED)
>> +		goto out_with_lock;
>> +
>> +	filep = fget(fd);
>> +	if (!filep)
>> +		goto out_with_lock;
>> +
>> +	if (filep->f_op != &uacce_fops)
>> +		goto out_with_fd;
>> +
>> +	src = filep->private_data;
>> +	if (!src)
>> +		goto out_with_fd;
>> +
>> +	if (tgt->uacce->flags & UACCE_DEV_SVA)
>> +		goto out_with_fd;
>> +
>> +	if (!src->qfrs[UACCE_QFRT_SS] || tgt->qfrs[UACCE_QFRT_SS])
>> +		goto out_with_fd;
>> +
>> +	ret = uacce_queue_map_qfr(tgt, src->qfrs[UACCE_QFRT_SS]);
>> +	if (ret)
>> +		goto out_with_fd;
>> +
>> +	tgt->qfrs[UACCE_QFRT_SS] = src->qfrs[UACCE_QFRT_SS];
>> +	list_add(&tgt->list, &src->qfrs[UACCE_QFRT_SS]->qs);
> This list_add() seems bogus: src->qfrs would already be
> on a list, so you are corrupting the list it is on.
>
>> +
>> +out_with_fd:
>> +	fput(filep);
>> +out_with_lock:
>> +	mutex_unlock(&uacce_mutex);
>> +	return ret;
>> +}
> [...]
>
>> +static long uacce_fops_unl_ioctl(struct file *filep,
>> +				 unsigned int cmd, unsigned long arg)
> You need to properly document all ioctls, and you also need to
> remove those that are not used by your first driver. They will
> just bit-rot, as we do not know if they will ever be used.
OK, understood.
>
>> +{
>> +	struct uacce_queue *q = filep->private_data;
>> +	struct uacce_device *uacce = q->uacce;
>> +
>> +	switch (cmd) {
>> +	case UACCE_CMD_SHARE_SVAS:
>> +		return uacce_cmd_share_qfr(q, arg);
>> +
>> +	case UACCE_CMD_START:
>> +		return uacce_start_queue(q);
>> +
>> +	case UACCE_CMD_PUT_Q:
>> +		return uacce_put_queue(q);
>> +
>> +	default:
>> +		if (!uacce->ops->ioctl)
>> +			return -EINVAL;
>> +
>> +		return uacce->ops->ioctl(q, cmd, arg);
>> +	}
>> +}
>> +
> [...]
>
>> +
>> +static int uacce_dev_open_check(struct uacce_device *uacce)
>> +{
>> +	if (uacce->flags & UACCE_DEV_SVA)
>> +		return 0;
>> +
>> +	/*
>> +	 * The device can be opened once if it does not support pasid
>> +	 */
>> +	if (kref_read(&uacce->cdev->kobj.kref) > 2)
>> +		return -EBUSY;
> You do not check whether the device supports pasid, so the comment
> does not match the code. Right now the code says that you cannot
> open a device more than once. Also, this check is racy: there is
> no lock protecting the read.
Will remove this, though the read itself is atomic.
The !sva case does not have such a limitation.
>
>> +
>> +	return 0;
>> +}
>> +
>> +static int uacce_fops_open(struct inode *inode, struct file *filep)
>> +{
>> +	struct uacce_queue *q;
>> +	struct iommu_sva *handle = NULL;
>> +	struct uacce_device *uacce;
>> +	int ret;
>> +	int pasid = 0;
>> +
>> +	uacce = idr_find(&uacce_idr, iminor(inode));
>> +	if (!uacce)
>> +		return -ENODEV;
>> +
>> +	if (!try_module_get(uacce->pdev->driver->owner))
>> +		return -ENODEV;
>> +
>> +	ret = uacce_dev_open_check(uacce);
>> +	if (ret)
>> +		goto out_with_module;
>> +
>> +	if (uacce->flags & UACCE_DEV_SVA) {
>> +		handle = iommu_sva_bind_device(uacce->pdev, current->mm, NULL);
>> +		if (IS_ERR(handle))
>> +			goto out_with_module;
>> +		pasid = iommu_sva_get_pasid(handle);
>> +	}
> The file descriptor can outlive the mm (through fork); what happens
> when the mm dies? Where is the sva_unbind? Maybe in the iommu code.
> At the very least a comment should be added explaining what happens.
The unbind is in uacce_fops_release.
Will register an mm_exit handler for the sva case.
>
>> +
>> +	q = kzalloc(sizeof(struct uacce_queue), GFP_KERNEL);
>> +	if (!q) {
>> +		ret = -ENOMEM;
>> +		goto out_with_module;
>> +	}
>> +
>> +	if (uacce->ops->get_queue) {
>> +		ret = uacce->ops->get_queue(uacce, pasid, q);
>> +		if (ret < 0)
>> +			goto out_with_mem;
>> +	}
>> +
>> +	q->pasid = pasid;
>> +	q->handle = handle;
>> +	q->uacce = uacce;
>> +	q->mm = current->mm;
>> +	memset(q->qfrs, 0, sizeof(q->qfrs));
>> +	INIT_LIST_HEAD(&q->list);
>> +	init_waitqueue_head(&q->wait);
>> +	filep->private_data = q;
>> +	q->state = UACCE_Q_INIT;
>> +
>> +	return 0;
>> +
>> +out_with_mem:
>> +	kfree(q);
>> +out_with_module:
>> +	module_put(uacce->pdev->driver->owner);
>> +	return ret;
>> +}
>> +
>> +static int uacce_fops_release(struct inode *inode, struct file *filep)
>> +{
>> +	struct uacce_queue *q = filep->private_data;
>> +	struct uacce_qfile_region *qfr;
>> +	struct uacce_device *uacce = q->uacce;
>> +	bool is_to_free_region;
>> +	int free_pages = 0;
>> +	int i;
>> +
>> +	mutex_lock(&uacce_mutex);
>> +
>> +	if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue)
>> +		uacce->ops->stop_queue(q);
>> +
>> +	for (i = 0; i < UACCE_QFRT_MAX; i++) {
>> +		qfr = q->qfrs[i];
>> +		if (!qfr)
>> +			continue;
>> +
>> +		is_to_free_region = false;
>> +		uacce_queue_unmap_qfr(q, qfr);
>> +		if (i == UACCE_QFRT_SS) {
>> +			list_del(&q->list);
>> +			if (list_empty(&qfr->qs))
>> +				is_to_free_region = true;
>> +		} else
>> +			is_to_free_region = true;
>> +
>> +		if (is_to_free_region) {
>> +			free_pages += qfr->nr_pages;
>> +			uacce_destroy_region(q, qfr);
>> +		}
>> +
>> +		qfr = NULL;
>> +	}
>> +
>> +	if (current->mm == q->mm) {
>> +		down_write(&q->mm->mmap_sem);
>> +		q->mm->data_vm -= free_pages;
>> +		up_write(&q->mm->mmap_sem);
> This is bogus: you do not take any reference on the mm through
> mmgrab(), so there is nothing protecting q->mm from being
> released. Note that you do not want to do mmgrab() in open, as
> the file descriptor can outlive the mm.
Will remove this.
>
>> +	}
>> +
>> +	if (uacce->flags & UACCE_DEV_SVA)
>> +		iommu_sva_unbind_device(q->handle);
>> +
>> +	if ((q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED) &&
>> +	     uacce->ops->put_queue)
>> +		uacce->ops->put_queue(q);
>> +
>> +	kfree(q);
>> +	mutex_unlock(&uacce_mutex);
>> +
>> +	module_put(uacce->pdev->driver->owner);
> As the file can outlive the process, it might also outlive the module;
> maybe you want to keep a reference on the module as part of the region
> and release it in uacce_destroy_region()
module_get() and module_put() already handle reference counting, so this
may not be needed any more.
>
>> +
>> +	return 0;
>> +}
>> +
>> +static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
>> +{
>> +	struct uacce_queue *q = filep->private_data;
>> +	struct uacce_device *uacce = q->uacce;
>> +	struct uacce_qfile_region *qfr;
>> +	enum uacce_qfrt type = 0;
>> +	unsigned int flags = 0;
>> +	int ret;
>> +
>> +	if (vma->vm_pgoff < UACCE_QFRT_MAX)
>> +		type = vma->vm_pgoff;
>> +
>> +	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
> Don't you also want VM_WIPEONFORK ?
Looks like it is required, thanks.
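The suggested change amounts to one extra flag in the mmap path (sketch; VM_WIPEONFORK zaps the mapping in the child, so a forked process cannot touch the parent's queue region):

```c
	/* queue mappings must not survive into a forked child */
	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND | VM_WIPEONFORK;
```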
>
>> +
>> +	mutex_lock(&uacce_mutex);
>> +
>> +	/* fixme: if the region need no pages, we don't need to check it */
>> +	if (q->mm->data_vm + vma_pages(vma) >
>> +	    rlimit(RLIMIT_DATA) >> PAGE_SHIFT) {
>> +		ret = -ENOMEM;
>> +		goto out_with_lock;
>> +	}
>> +
>> +	if (q->qfrs[type]) {
>> +		ret = -EBUSY;
> What about -EEXIST? That test checks whether a region of the given
> type already exists for the uacce_queue, which is private to
> that file descriptor. So it means that the userspace which
> opened the file is trying to create the same region type again,
> which already exists.
Good idea.
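A sketch of the suggested error-code change:

```c
	if (q->qfrs[type]) {
		/* this fd already created a region of this type */
		ret = -EEXIST;
		goto out_with_lock;
	}
```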
>
>> +		goto out_with_lock;
>> +	}
>> +
>> +	switch (type) {
>> +	case UACCE_QFRT_MMIO:
>> +		flags = UACCE_QFRF_SELFMT;
>> +		break;
>> +
>> +	case UACCE_QFRT_SS:
>> +		if (q->state != UACCE_Q_STARTED) {
>> +			ret = -EINVAL;
>> +			goto out_with_lock;
>> +		}
>> +
>> +		if (uacce->flags & UACCE_DEV_SVA) {
>> +			ret = -EINVAL;
>> +			goto out_with_lock;
>> +		}
>> +
>> +		flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
>> +
>> +		break;
>> +
>> +	case UACCE_QFRT_DKO:
>> +		if (uacce->flags & UACCE_DEV_SVA) {
>> +			ret = -EINVAL;
>> +			goto out_with_lock;
>> +		}
>> +
>> +		flags = UACCE_QFRF_MAP | UACCE_QFRF_KMAP;
>> +
>> +		break;
>> +
>> +	case UACCE_QFRT_DUS:
>> +		if (uacce->flags & UACCE_DEV_SVA) {
>> +			flags = UACCE_QFRF_SELFMT;
>> +			break;
>> +		}
>> +
>> +		flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
>> +		break;
>> +
>> +	default:
>> +		WARN_ON(&uacce->dev);
>> +		break;
>> +	}
>> +
>> +	qfr = uacce_create_region(q, vma, type, flags);
>> +	if (IS_ERR(qfr)) {
>> +		ret = PTR_ERR(qfr);
>> +		goto out_with_lock;
>> +	}
>> +	q->qfrs[type] = qfr;
>> +
>> +	if (type == UACCE_QFRT_SS) {
>> +		INIT_LIST_HEAD(&qfr->qs);
>> +		list_add(&q->list, &q->qfrs[type]->qs);
>> +	}
>> +
>> +	mutex_unlock(&uacce_mutex);
>> +
>> +	if (qfr->pages)
>> +		q->mm->data_vm += qfr->nr_pages;
> The mm->data_vm field is protected by mmap_sem taken in write
> mode, AFAIR, so what you are doing here is unsafe.
>
>> +
>> +	return 0;
>> +
>> +out_with_lock:
>> +	mutex_unlock(&uacce_mutex);
>> +	return ret;
>> +}
>> +
> [...]
>
>> +/* Borrowed from VFIO to fix msi translation */
>> +static bool uacce_iommu_has_sw_msi(struct iommu_group *group,
>> +				   phys_addr_t *base)
> I fail to see why you need this in a common framework; this
> seems to be specific to a device.
>
>> +{
>> +	struct list_head group_resv_regions;
>> +	struct iommu_resv_region *region, *next;
>> +	bool ret = false;
>> +
>> +	INIT_LIST_HEAD(&group_resv_regions);
>> +	iommu_get_group_resv_regions(group, &group_resv_regions);
>> +	list_for_each_entry(region, &group_resv_regions, list) {
>> +		/*
>> +		 * The presence of any 'real' MSI regions should take
>> +		 * precedence over the software-managed one if the
>> +		 * IOMMU driver happens to advertise both types.
>> +		 */
>> +		if (region->type == IOMMU_RESV_MSI) {
>> +			ret = false;
>> +			break;
>> +		}
>> +
>> +		if (region->type == IOMMU_RESV_SW_MSI) {
>> +			*base = region->start;
>> +			ret = true;
>> +		}
>> +	}
>> +
>> +	list_for_each_entry_safe(region, next, &group_resv_regions, list)
>> +		kfree(region);
>> +
>> +	return ret;
>> +}
>> +
>> +static int uacce_set_iommu_domain(struct uacce_device *uacce)
>> +{
>> +	struct iommu_domain *domain;
>> +	struct iommu_group *group;
>> +	struct device *dev = uacce->pdev;
>> +	bool resv_msi;
>> +	phys_addr_t resv_msi_base = 0;
>> +	int ret;
>> +
>> +	if (uacce->flags & UACCE_DEV_SVA)
>> +		return 0;
>> +
>> +	/* allocate and attach an unmanaged domain */
>> +	domain = iommu_domain_alloc(uacce->pdev->bus);
>> +	if (!domain) {
>> +		dev_err(&uacce->dev, "cannot get domain for iommu\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	ret = iommu_attach_device(domain, uacce->pdev);
>> +	if (ret)
>> +		goto err_with_domain;
>> +
>> +	if (iommu_capable(dev->bus, IOMMU_CAP_CACHE_COHERENCY))
>> +		uacce->prot |= IOMMU_CACHE;
>> +
>> +	group = iommu_group_get(dev);
>> +	if (!group) {
>> +		ret = -EINVAL;
>> +		goto err_with_domain;
>> +	}
>> +
>> +	resv_msi = uacce_iommu_has_sw_msi(group, &resv_msi_base);
>> +	iommu_group_put(group);
>> +
>> +	if (resv_msi) {
>> +		if (!irq_domain_check_msi_remap() &&
>> +		    !iommu_capable(dev->bus, IOMMU_CAP_INTR_REMAP)) {
>> +			dev_warn(dev, "No interrupt remapping support!");
>> +			ret = -EPERM;
>> +			goto err_with_domain;
>> +		}
>> +
>> +		ret = iommu_get_msi_cookie(domain, resv_msi_base);
>> +		if (ret)
>> +			goto err_with_domain;
>> +	}
>> +
>> +	return 0;
>> +
>> +err_with_domain:
>> +	iommu_domain_free(domain);
>> +	return ret;
>> +}
>> +
>> +static void uacce_unset_iommu_domain(struct uacce_device *uacce)
>> +{
>> +	struct iommu_domain *domain;
>> +
>> +	if (uacce->flags & UACCE_DEV_SVA)
>> +		return;
>> +
>> +	domain = iommu_get_domain_for_dev(uacce->pdev);
>> +	if (!domain) {
>> +		dev_err(&uacce->dev, "bug: no domain attached to device\n");
>> +		return;
>> +	}
>> +
>> +	iommu_detach_device(domain, uacce->pdev);
>> +	iommu_domain_free(domain);
>> +}
>> +
>> +/**
>> + * uacce_register - register an accelerator
>> + * @parent: pointer of uacce parent device
>> + * @interface: pointer of uacce_interface for register
>> + */
>> +struct uacce_device *uacce_register(struct device *parent,
>> +				    struct uacce_interface *interface)
>> +{
>> +	int ret;
>> +	struct uacce_device *uacce;
>> +	unsigned int flags = interface->flags;
>> +
>> +	uacce = kzalloc(sizeof(struct uacce_device), GFP_KERNEL);
>> +	if (!uacce)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	if (flags & UACCE_DEV_SVA) {
>> +		ret = iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA);
>> +		if (ret)
>> +			flags &= ~UACCE_DEV_SVA;
>> +	}
>> +
>> +	uacce->pdev = parent;
>> +	uacce->flags = flags;
>> +	uacce->ops = interface->ops;
>> +
>> +	ret = uacce_set_iommu_domain(uacce);
>> +	if (ret)
>> +		goto err_free;
> Why do you need to change the IOMMU domain? This is orthogonal to
> what you are trying to achieve. The domain has nothing to do with SVA
> or userspace queues (at least not on x86, AFAIK).
>
>
>> +
>> +	mutex_lock(&uacce_mutex);
>> +
>> +	ret = idr_alloc(&uacce_idr, uacce, 0, 0, GFP_KERNEL);
>> +	if (ret < 0)
>> +		goto err_with_lock;
>> +
>> +	uacce->cdev = cdev_alloc();
>> +	uacce->cdev->ops = &uacce_fops;
>> +	uacce->dev_id = ret;
>> +	uacce->cdev->owner = THIS_MODULE;
>> +	device_initialize(&uacce->dev);
>> +	uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
>> +	uacce->dev.class = uacce_class;
>> +	uacce->dev.groups = uacce_dev_groups;
>> +	uacce->dev.parent = uacce->pdev;
>> +	uacce->dev.release = uacce_release;
>> +	dev_set_name(&uacce->dev, "%s-%d", interface->name, uacce->dev_id);
>> +	ret = cdev_device_add(uacce->cdev, &uacce->dev);
>> +	if (ret)
>> +		goto err_with_idr;
>> +
>> +	mutex_unlock(&uacce_mutex);
>> +
>> +	return uacce;
>> +
>> +err_with_idr:
>> +	idr_remove(&uacce_idr, uacce->dev_id);
>> +err_with_lock:
>> +	mutex_unlock(&uacce_mutex);
>> +	uacce_unset_iommu_domain(uacce);
>> +err_free:
>> +	if (flags & UACCE_DEV_SVA)
>> +		iommu_dev_disable_feature(uacce->pdev, IOMMU_DEV_FEAT_SVA);
>> +	kfree(uacce);
>> +	return ERR_PTR(ret);
>> +}
>> +EXPORT_SYMBOL_GPL(uacce_register);
> [...]
>
>> diff --git a/include/linux/uacce.h b/include/linux/uacce.h
>> new file mode 100644
>> index 0000000..8ce0640
>> --- /dev/null
>> +++ b/include/linux/uacce.h
>> @@ -0,0 +1,168 @@
>> +/* SPDX-License-Identifier: GPL-2.0-or-later */
>> +#ifndef _LINUX_UACCE_H
>> +#define _LINUX_UACCE_H
>> +
>> +#include <linux/cdev.h>
>> +#include <uapi/misc/uacce/uacce.h>
>> +
>> +#define UACCE_NAME		"uacce"
>> +
>> +struct uacce_queue;
>> +struct uacce_device;
>> +
>> +/* uacce queue file flag, requires different operation */
>> +#define UACCE_QFRF_MAP		BIT(0)	/* map to current queue */
>> +#define UACCE_QFRF_MMAP		BIT(1)	/* map to user space */
>> +#define UACCE_QFRF_KMAP		BIT(2)	/* map to kernel space */
>> +#define UACCE_QFRF_DMA		BIT(3)	/* use dma api for the region */
>> +#define UACCE_QFRF_SELFMT	BIT(4)	/* self maintained qfr */
>> +
>> +/**
>> + * struct uacce_qfile_region - structure of queue file region
>> + * @type: type of the qfr
>> + * @iova: iova share between user and device space
>> + * @pages: pages pointer of the qfr memory
>> + * @nr_pages: page numbers of the qfr memory
>> + * @prot: qfr protection flag
>> + * @flags: flags of qfr
>> + * @qs: list sharing the same region, for ss region
>> + * @kaddr: kernel addr of the qfr
>> + * @dma: dma address, if created by dma api
>> + */
>> +struct uacce_qfile_region {
>> +	enum uacce_qfrt type;
>> +	unsigned long iova;
>> +	struct page **pages;
>> +	u32 nr_pages;
>> +	u32 prot;
>> +	u32 flags;
>> +	struct list_head qs;
>> +	void *kaddr;
>> +	dma_addr_t dma;
>> +};
>> +
>> +/**
>> + * struct uacce_ops - uacce device operations
>> + * @get_available_instances:  get available instances left of the device
>> + * @get_queue: get a queue from the device
>> + * @put_queue: free a queue to the device
>> + * @start_queue: make the queue start work after get_queue
>> + * @stop_queue: make the queue stop work before put_queue
>> + * @is_q_updated: check whether the task is finished
>> + * @mask_notify: mask the task irq of queue
>> + * @mmap: mmap addresses of queue to user space
>> + * @reset: reset the uacce device
>> + * @reset_queue: reset the queue
>> + * @ioctl: ioctl for user space users of the queue
>> + */
>> +struct uacce_ops {
>> +	int (*get_available_instances)(struct uacce_device *uacce);
>> +	int (*get_queue)(struct uacce_device *uacce, unsigned long arg,
>> +			 struct uacce_queue *q);
>> +	void (*put_queue)(struct uacce_queue *q);
>> +	int (*start_queue)(struct uacce_queue *q);
>> +	void (*stop_queue)(struct uacce_queue *q);
>> +	int (*is_q_updated)(struct uacce_queue *q);
>> +	void (*mask_notify)(struct uacce_queue *q, int event_mask);
>> +	int (*mmap)(struct uacce_queue *q, struct vm_area_struct *vma,
>> +		    struct uacce_qfile_region *qfr);
>> +	int (*reset)(struct uacce_device *uacce);
>> +	int (*reset_queue)(struct uacce_queue *q);
>> +	long (*ioctl)(struct uacce_queue *q, unsigned int cmd,
>> +		      unsigned long arg);
>> +};
>> +
>> +/**
>> + * struct uacce_interface
>> + * @name: the uacce device name.  Will show up in sysfs
>> + * @flags: uacce device attributes
>> + * @ops: pointer to the struct uacce_ops
>> + *
>> + * This structure is used for the uacce_register()
>> + */
>> +struct uacce_interface {
>> +	char name[32];
> You should add a define for the maximum length of the name.
Sure, thanks
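A sketch of the suggested define (the name and the value 64 are assumptions, not from the patch):

```c
#define UACCE_MAX_NAME_SIZE	64

struct uacce_interface {
	char name[UACCE_MAX_NAME_SIZE];
	unsigned int flags;
	struct uacce_ops *ops;
};
```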
>
>> +	unsigned int flags;
> This should be enum uacce_dev_flag, not unsigned int, and that
> enum should be defined above and not in the uAPI; see the comments
> I made next to that enum.
enum uacce_dev_flag is better, thanks
>
>> +	struct uacce_ops *ops;
>> +};
>> +
>> +enum uacce_q_state {
>> +	UACCE_Q_INIT,
>> +	UACCE_Q_STARTED,
>> +	UACCE_Q_ZOMBIE,
>> +};
>> +
>> +/**
>> + * struct uacce_queue
>> + * @uacce: pointer to uacce
>> + * @priv: private pointer
>> + * @wait: wait queue head
>> + * @pasid: pasid of the queue
>> + * @handle: iommu_sva handle return from iommu_sva_bind_device
>> + * @list: share list for qfr->qs
>> + * @mm: current->mm
>> + * @qfrs: pointer of qfr regions
>> + * @state: queue state machine
>> + */
>> +struct uacce_queue {
>> +	struct uacce_device *uacce;
>> +	void *priv;
>> +	wait_queue_head_t wait;
>> +	int pasid;
>> +	struct iommu_sva *handle;
>> +	struct list_head list;
>> +	struct mm_struct *mm;
>> +	struct uacce_qfile_region *qfrs[UACCE_QFRT_MAX];
>> +	enum uacce_q_state state;
>> +};
>> +
>> +/**
>> + * struct uacce_device
>> + * @algs: supported algorithms
>> + * @api_ver: api version
>> + * @qf_pg_size: page size of the queue file regions
>> + * @ops: pointer to the struct uacce_ops
>> + * @pdev: pointer to the parent device
>> + * @is_vf: whether virtual function
>> + * @flags: uacce attributes
>> + * @dev_id: id of the uacce device
>> + * @prot: uacce protection flag
>> + * @cdev: cdev of the uacce
>> + * @dev: dev of the uacce
>> + * @priv: private pointer of the uacce
>> + */
>> +struct uacce_device {
>> +	const char *algs;
>> +	const char *api_ver;
>> +	unsigned long qf_pg_size[UACCE_QFRT_MAX];
>> +	struct uacce_ops *ops;
>> +	struct device *pdev;
>> +	bool is_vf;
>> +	u32 flags;
>> +	u32 dev_id;
>> +	u32 prot;
>> +	struct cdev *cdev;
>> +	struct device dev;
>> +	void *priv;
>> +};
>> +
>> +#if IS_ENABLED(CONFIG_UACCE)
>> +
>> +struct uacce_device *uacce_register(struct device *parent,
>> +				    struct uacce_interface *interface);
>> +void uacce_unregister(struct uacce_device *uacce);
>> +
>> +#else /* CONFIG_UACCE */
>> +
>> +static inline
>> +struct uacce_device *uacce_register(struct device *parent,
>> +				    struct uacce_interface *interface)
>> +{
>> +	return ERR_PTR(-ENODEV);
>> +}
>> +
>> +static inline void uacce_unregister(struct uacce_device *uacce) {}
>> +
>> +#endif /* CONFIG_UACCE */
>> +
>> +#endif /* _LINUX_UACCE_H */
>> diff --git a/include/uapi/misc/uacce/uacce.h b/include/uapi/misc/uacce/uacce.h
>> new file mode 100644
>> index 0000000..c859668
>> --- /dev/null
>> +++ b/include/uapi/misc/uacce/uacce.h
>> @@ -0,0 +1,41 @@
>> +/* SPDX-License-Identifier: GPL-2.0-or-later */
>> +#ifndef _UAPIUUACCE_H
>> +#define _UAPIUUACCE_H
>> +
>> +#include <linux/types.h>
>> +#include <linux/ioctl.h>
>> +
>> +#define UACCE_CMD_SHARE_SVAS	_IO('W', 0)
>> +#define UACCE_CMD_START		_IO('W', 1)
>> +#define UACCE_CMD_PUT_Q		_IO('W', 2)
>> +
>> +/**
>> + * enum uacce_dev_flag: Device flags:
>> + * @UACCE_DEV_SHARE_DOMAIN: no PASID, can share sva for one process
>> + * @UACCE_DEV_SVA: Shared Virtual Addresses
>> + *		   Support PASID
>> + *		   Support device page fault (pcie device) or
>> + *		   smmu stall (platform device)
>> + */
>> +enum uacce_dev_flag {
>> +	UACCE_DEV_SHARE_DOMAIN = 0x0,
> UACCE_DEV_SHARE_DOMAIN is not used anywhere; better not to introduce
> something that is not used.
Yes, will remove it.
>
>
>> +	UACCE_DEV_SVA = 0x1,
>> +};
> A more general question: why is it part of the uAPI header file?
> To me it seems that those flags are only used internally by the
> kernel and never need to be exposed to userspace.
The flags are required by the user application.
The user can read the flags via sysfs and learn what type of device it is.
For example, when flags & UACCE_DEV_SVA is set, malloc'ed memory can be used directly.
>
>> +
>> +/**
>> + * enum uacce_qfrt: qfrt type
>> + * @UACCE_QFRT_MMIO: device mmio region
>> + * @UACCE_QFRT_DKO: device kernel-only region
>> + * @UACCE_QFRT_DUS: device user share region
>> + * @UACCE_QFRT_SS: static shared memory region
>> + * @UACCE_QFRT_MAX: indicate the boundary
>> + */
> Your first driver only uses DUS and MMIO; you should not define
> things that are not even used by the first driver, especially when
> it comes to the userspace API.
Sure, thanks
>
>> +enum uacce_qfrt {
>> +	UACCE_QFRT_MMIO = 0,
>> +	UACCE_QFRT_DKO = 1,
>> +	UACCE_QFRT_DUS = 2,
>> +	UACCE_QFRT_SS = 3,
>> +	UACCE_QFRT_MAX = 16,
> Isn't 16 a bit low? Do you really need a maximum? I would not
> expose or advertise a maximum in this userspace-facing header.
Good idea, will remove UACCE_QFRT_MAX and leave it open.

Thanks
>


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 2/3] uacce: add uacce driver
       [not found]       ` <5db257c6.1c69fb81.bfe34.a4afSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2019-10-25  7:04         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 15+ messages in thread
From: Jean-Philippe Brucker @ 2019-10-25  7:04 UTC (permalink / raw)
  To: zhangfei.gao
  Cc: Jerome Glisse, Zhangfei Gao, francois.ozog, Herbert Xu,
	Arnd Bergmann, Greg Kroah-Hartman, Zaibo Xu, ilias.apalodimas,
	linux-kernel, linux-crypto, Wangzhou, grant.likely,
	haojian . zhuang, Kenneth Lee, linux-accelerators,
	kenneth-lee-2012

On Fri, Oct 25, 2019 at 10:02:36AM +0800, zhangfei.gao@foxmail.com wrote:
> > It already exists; it is called an mmu notifier. You can register an mmu
> > notifier and get a callback once the mm exits.
> Currently we register mm_exit for sva path, as suggested by Jean.

Yes that's called from a release() mmu_notifier callback. 

> static struct iommu_sva_ops uacce_sva_ops = {
>         .mm_exit = uacce_sva_exit,
> };
> iommu_sva_set_ops(handle, &uacce_sva_ops);
> 
> Still not certain whether we have to register mm_exit for both cases,
> sva and !sva, since it is a common situation.

I was wondering about that. For !SVA, since all DMA memory is mapped
through mmap, you'll be notified by the vma->vm_ops->close() callback when
the mm disappears, so I don't think you need a mm release notifier.
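The close() path referred to here might be wired up roughly as follows (illustrative sketch; uacce_vma_close and the qfr lookup are assumptions, not code from the patch):

```c
static void uacce_vma_close(struct vm_area_struct *vma)
{
	struct uacce_queue *q = vma->vm_private_data;
	struct uacce_qfile_region *qfr = q->qfrs[vma->vm_pgoff];

	/* called on munmap() and on mm teardown: undo the IOMMU
	 * mappings and TLB entries backing this region */
	uacce_queue_unmap_qfr(q, qfr);
}

static const struct vm_operations_struct uacce_vm_ops = {
	.close = uacce_vma_close,
};
```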

However that callback will just remove the IOMMU mappings and invalidate
TLBs, but the queue might still be running and since no fault handler is
registered, the IOMMU driver will flood the kernel logs with translation
faults. Similarly, the user can simply start the queue and call munmap()
for the same result, or even just program the queue with invalid DMA
addresses.

What we could do is when !SVA, register an IOMMU fault handler (the shiny
new iommu_register_device_fault_handler()) and consume all faults
ourselves. That will at least prevent userspace from flooding the kernel
logs. Even better, as soon as we get a fault notification, we could stop
the queue associated to that DMA address to prevent further faults, though
we'll still receive those that are already pending in the fault queue.
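A sketch of that !SVA fault-handler idea (the iommu_register_device_fault_handler() call follows the API as merged around v5.2; uacce_stop_queue_for_fault() is a hypothetical helper):

```c
static int uacce_iommu_fault_handler(struct iommu_fault *fault, void *data)
{
	struct uacce_device *uacce = data;

	/* consume the fault so the IOMMU driver does not flood the log,
	 * and stop the queue that triggered it to limit further faults */
	uacce_stop_queue_for_fault(uacce, fault);

	return 0;
}

/* at registration time, only for devices without UACCE_DEV_SVA: */
ret = iommu_register_device_fault_handler(uacce->pdev,
					  uacce_iommu_fault_handler, uacce);
```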

Thanks,
Jean


* Re: [PATCH v6 2/3] uacce: add uacce driver
       [not found]         ` <5db25e56.1c69fb81.4fe57.380cSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2019-10-25  7:35           ` Jean-Philippe Brucker
  0 siblings, 0 replies; 15+ messages in thread
From: Jean-Philippe Brucker @ 2019-10-25  7:35 UTC (permalink / raw)
  To: zhangfei.gao
  Cc: Zhangfei Gao, Greg Kroah-Hartman, Arnd Bergmann, Herbert Xu,
	jonathan.cameron, grant.likely, ilias.apalodimas, francois.ozog,
	kenneth-lee-2012, Wangzhou, haojian . zhuang, linux-accelerators,
	linux-kernel, linux-crypto, Kenneth Lee, Zaibo Xu

On Fri, Oct 25, 2019 at 10:28:30AM +0800, zhangfei.gao@foxmail.com wrote:
> > Something else I noticed is that uacce_idr isn't currently protected. The IDR
> > API expects the caller to use its own locking scheme. You could replace
> > it with an xarray, which I think is preferred over IDR now and provides
> > xa_lock.
> Currently idr_alloc and idr_remove are simply protected by uacce_mutex,

Ah right, but idr_find() also needs to be protected? 

> Will check xarray; it looks more complicated than idr.

Having tried both, it can easily replace idr. For uacce I think it could
be something like (locking included):

	static DEFINE_XARRAY_ALLOC(uacce_xa);

	uacce = xa_load(&uacce_xa, iminor(inode));

	ret = xa_alloc(&uacce_xa, &uacce->dev_id, uacce, xa_limit_32b,
		       GFP_KERNEL);

	xa_erase(&uacce_xa, uacce->dev_id);

Thanks,
Jean


