* [RFC v3 0/4] SMMUv3 Driver
@ 2017-12-05 3:59 Sameer Goel
2017-12-05 3:59 ` [RFC v3 1/4] Port WARN_ON_ONCE() from Linux Sameer Goel
` (3 more replies)
0 siblings, 4 replies; 19+ messages in thread
From: Sameer Goel @ 2017-12-05 3:59 UTC (permalink / raw)
To: xen-devel, julien.grall, mjaggi; +Cc: Sameer Goel, sstabellini, shankerd
This RFC addresses the review comments from the last RFC [1].
All the IORT related changes have been dropped in this version, as these will
be covered by [2]. The IORT implementation will have to provide a Linux-like
API to the SMMUv3 driver.
List of changes:
- Addition of a linux_compat header.
- Addition of a common header for Arm SMMU defines.
- Rebase of the SMMUv3 driver onto the driver in Linux kernel 4.14-rc7.
[1] https://www.mail-archive.com/xen-devel@lists.xen.org/msg123077.html
[2] https://www.mail-archive.com/xen-devel@lists.xen.org/msg128989.html
Sameer Goel (4):
Port WARN_ON_ONCE() from Linux
xen/linux_compat: Add a Linux compat header
Add verbatim copy of arm-smmu-v3.c from Linux
xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
xen/drivers/Kconfig | 2 +
xen/drivers/passthrough/arm/Kconfig | 14 +
xen/drivers/passthrough/arm/Makefile | 3 +-
xen/drivers/passthrough/arm/arm_smmu.h | 189 ++
xen/drivers/passthrough/arm/smmu-v3.c | 3388 ++++++++++++++++++++++++++++++++
xen/include/xen/lib.h | 11 +
xen/include/xen/linux_compat.h | 106 +
7 files changed, 3712 insertions(+), 1 deletion(-)
create mode 100644 xen/drivers/passthrough/arm/Kconfig
create mode 100644 xen/drivers/passthrough/arm/arm_smmu.h
create mode 100644 xen/drivers/passthrough/arm/smmu-v3.c
create mode 100644 xen/include/xen/linux_compat.h
--
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc. Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.
* [RFC v3 1/4] Port WARN_ON_ONCE() from Linux
2017-12-05 3:59 [RFC v3 0/4] SMMUv3 Driver Sameer Goel
@ 2017-12-05 3:59 ` Sameer Goel
2017-12-05 9:18 ` Jan Beulich
2017-12-05 3:59 ` [RFC v3 2/4] xen/linux_compat: Add a Linux compat header Sameer Goel
` (2 subsequent siblings)
3 siblings, 1 reply; 19+ messages in thread
From: Sameer Goel @ 2017-12-05 3:59 UTC (permalink / raw)
To: xen-devel, julien.grall, mjaggi
Cc: sstabellini, wei.liu2, george.dunlap, Andrew.Cooper3, jbeulich,
Ian.Jackson, Sameer Goel, nd, shankerd
Port the WARN_ON_ONCE() macro from Linux. A return value is expected from
this macro, so the implementation does not follow the Xen convention of
wrapping macros in a do..while block.
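As a usage sketch (hypothetical caller, not part of this patch), the macro
evaluates to the truth value of its predicate, so it can sit directly in a
condition:

    /* Hypothetical example: warn only once about out-of-range SIDs, but
     * fail the call every time the condition holds. */
    static int check_sid(uint32_t sid, uint32_t sid_limit)
    {
        if ( WARN_ON_ONCE(sid >= sid_limit) )
            return -ERANGE;

        return 0;
    }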
---
xen/include/xen/lib.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index ed00ae1..83206c0 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -11,6 +11,17 @@
#define BUG_ON(p) do { if (unlikely(p)) BUG(); } while (0)
#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
+#define WARN_ON_ONCE(p) ({ \
+ static bool __section(".data.unlikely") __warned; \
+ int __ret_warn_once = !!(p); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+ __warned = true; \
+ WARN_ON(1); \
+ } \
+ unlikely(__ret_warn_once); \
+})
+
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
/* Force a compilation error if condition is true */
#define BUILD_BUG_ON(cond) ({ _Static_assert(!(cond), "!(" #cond ")"); })
--
* [RFC v3 2/4] xen/linux_compat: Add a Linux compat header
2017-12-05 3:59 [RFC v3 0/4] SMMUv3 Driver Sameer Goel
2017-12-05 3:59 ` [RFC v3 1/4] Port WARN_ON_ONCE() from Linux Sameer Goel
@ 2017-12-05 3:59 ` Sameer Goel
2017-12-05 9:20 ` Jan Beulich
2017-12-05 12:31 ` Julien Grall
2017-12-05 3:59 ` [RFC v3 3/4] Add verbatim copy of arm-smmu-v3.c from Linux Sameer Goel
2017-12-05 3:59 ` [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver Sameer Goel
3 siblings, 2 replies; 19+ messages in thread
From: Sameer Goel @ 2017-12-05 3:59 UTC (permalink / raw)
To: xen-devel, julien.grall, mjaggi
Cc: sstabellini, wei.liu2, george.dunlap, Andrew.Cooper3, jbeulich,
Ian.Jackson, Sameer Goel, nd, shankerd
When porting files from Linux, it is useful to have a header at a common
location that maps Linux APIs to Xen APIs.
This file adds common API functions and other defines that are needed for
porting the Arm SMMU drivers.
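As an illustrative sketch (not part of this patch), a fragment of ported
Linux code can then build mostly unchanged; kzalloc(), kfree() and dev_err()
below all resolve to Xen primitives through this header:

    #include <xen/linux_compat.h>

    /* Hypothetical ported fragment. kzalloc() expands to _xzalloc() and
     * discards the GFP flag; dev_err() expands to printk(XENLOG_ERR ...),
     * ignoring the device argument. */
    static int example_setup(struct device *dev, size_t len)
    {
        void *buf = kzalloc(len, GFP_KERNEL);

        if ( !buf )
        {
            dev_err(dev, "failed to allocate %zu bytes\n", len);
            return -ENOMEM;
        }

        kfree(buf);
        return 0;
    }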
---
xen/include/xen/linux_compat.h | 106 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 106 insertions(+)
create mode 100644 xen/include/xen/linux_compat.h
diff --git a/xen/include/xen/linux_compat.h b/xen/include/xen/linux_compat.h
new file mode 100644
index 0000000..217e0cc
--- /dev/null
+++ b/xen/include/xen/linux_compat.h
@@ -0,0 +1,106 @@
+/******************************************************************************
+ * include/xen/linux_compat.h
+ *
+ * Compatibility defines for porting code from Linux to Xen
+ *
+ * Copyright (c) 2017 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __XEN_LINUX_COMPAT_H__
+#define __XEN_LINUX_COMPAT_H__
+
+#include <asm/types.h>
+
+typedef paddr_t phys_addr_t;
+typedef paddr_t dma_addr_t;
+
+/* Alias to Xen device tree helpers */
+#define device_node dt_device_node
+#define of_phandle_args dt_phandle_args
+#define of_device_id dt_device_match
+#define of_match_node dt_match_node
+#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
+#define of_property_read_bool dt_property_read_bool
+#define of_parse_phandle_with_args dt_parse_phandle_with_args
+/* The user should consider if it is safe to treat mutex as a spinlock */
+#define mutex spinlock_t
+#define mutex_init spin_lock_init
+#define mutex_lock spin_lock
+#define mutex_unlock spin_unlock
+
+#define ilog2 LOG_2
+
+#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) \
+({ \
+ s_time_t deadline = NOW() + MICROSECS(timeout_us); \
+ for (;;) \
+ { \
+ (val) = op(addr); \
+ if ( cond ) \
+ break; \
+ if ( NOW() > deadline ) \
+ { \
+ (val) = op(addr); \
+ break; \
+ } \
+ udelay(sleep_us); \
+ } \
+ (cond) ? 0 : -ETIMEDOUT; \
+})
+
+#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
+ readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
+
+/* Xen: Helpers for IRQ functions */
+#define request_irq(irq, func, flags, name, dev) request_irq(irq, flags, func, name, dev)
+#define free_irq release_irq
+
+enum irqreturn {
+ IRQ_NONE = (0 << 0),
+ IRQ_HANDLED = (1 << 0),
+ IRQ_WAKE_THREAD = (2 << 0),
+};
+
+typedef enum irqreturn irqreturn_t;
+
+/* Device logger functions */
+#define dev_print(dev, lvl, fmt, ...) \
+ printk(lvl fmt, ## __VA_ARGS__)
+
+#define dev_dbg(dev, fmt, ...) dev_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
+#define dev_notice(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
+#define dev_warn(dev, fmt, ...) dev_print(dev, XENLOG_WARNING, fmt, ## __VA_ARGS__)
+#define dev_err(dev, fmt, ...) dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
+#define dev_info(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
+
+#define dev_err_ratelimited(dev, fmt, ...) \
+ dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
+
+#define dev_name(dev) dt_node_full_name(dev_to_dt(dev))
+
+/* Alias to Xen allocation helpers */
+#define kfree xfree
+#define kmalloc(size, flags) _xmalloc(size, sizeof(void *))
+#define kzalloc(size, flags) _xzalloc(size, sizeof(void *))
+#define devm_kzalloc(dev, size, flags) _xzalloc(size, sizeof(void *))
+#define kmalloc_array(size, n, flags) _xmalloc_array(size, sizeof(void *), n)
+
+/* Alias to Xen time functions */
+#define ktime_t s_time_t
+#define ktime_add_us(t,i) (NOW() + MICROSECS(i))
+#define ktime_compare(t,i) (NOW() > (i))
+
+#endif /* __XEN_LINUX_COMPAT_H__ */
--
* [RFC v3 3/4] Add verbatim copy of arm-smmu-v3.c from Linux
2017-12-05 3:59 [RFC v3 0/4] SMMUv3 Driver Sameer Goel
2017-12-05 3:59 ` [RFC v3 1/4] Port WARN_ON_ONCE() from Linux Sameer Goel
2017-12-05 3:59 ` [RFC v3 2/4] xen/linux_compat: Add a Linux compat header Sameer Goel
@ 2017-12-05 3:59 ` Sameer Goel
2017-12-05 3:59 ` [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver Sameer Goel
3 siblings, 0 replies; 19+ messages in thread
From: Sameer Goel @ 2017-12-05 3:59 UTC (permalink / raw)
To: xen-devel, julien.grall, mjaggi; +Cc: Sameer Goel, sstabellini, shankerd
Based on commit 7aa8619a66aea52b145e04cbab4f8d6a4e5f3f3b
This is a verbatim snapshot of arm-smmu-v3.c from the Linux kernel source
tree.
No Xen code has been added, and the file is not built.
---
xen/drivers/passthrough/arm/smmu-v3.c | 2885 +++++++++++++++++++++++++++++++++
1 file changed, 2885 insertions(+)
create mode 100644 xen/drivers/passthrough/arm/smmu-v3.c
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
new file mode 100644
index 0000000..e67ba6c
--- /dev/null
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -0,0 +1,2885 @@
+/*
+ * IOMMU API for ARM architected SMMUv3 implementations.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (C) 2015 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * This driver is powered by bad coffee and bombay mix.
+ */
+
+#include <linux/acpi.h>
+#include <linux/acpi_iort.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/iommu.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+
+#include <linux/amba/bus.h>
+
+#include "io-pgtable.h"
+
+/* MMIO registers */
+#define ARM_SMMU_IDR0 0x0
+#define IDR0_ST_LVL_SHIFT 27
+#define IDR0_ST_LVL_MASK 0x3
+#define IDR0_ST_LVL_2LVL (1 << IDR0_ST_LVL_SHIFT)
+#define IDR0_STALL_MODEL_SHIFT 24
+#define IDR0_STALL_MODEL_MASK 0x3
+#define IDR0_STALL_MODEL_STALL (0 << IDR0_STALL_MODEL_SHIFT)
+#define IDR0_STALL_MODEL_FORCE (2 << IDR0_STALL_MODEL_SHIFT)
+#define IDR0_TTENDIAN_SHIFT 21
+#define IDR0_TTENDIAN_MASK 0x3
+#define IDR0_TTENDIAN_LE (2 << IDR0_TTENDIAN_SHIFT)
+#define IDR0_TTENDIAN_BE (3 << IDR0_TTENDIAN_SHIFT)
+#define IDR0_TTENDIAN_MIXED (0 << IDR0_TTENDIAN_SHIFT)
+#define IDR0_CD2L (1 << 19)
+#define IDR0_VMID16 (1 << 18)
+#define IDR0_PRI (1 << 16)
+#define IDR0_SEV (1 << 14)
+#define IDR0_MSI (1 << 13)
+#define IDR0_ASID16 (1 << 12)
+#define IDR0_ATS (1 << 10)
+#define IDR0_HYP (1 << 9)
+#define IDR0_COHACC (1 << 4)
+#define IDR0_TTF_SHIFT 2
+#define IDR0_TTF_MASK 0x3
+#define IDR0_TTF_AARCH64 (2 << IDR0_TTF_SHIFT)
+#define IDR0_TTF_AARCH32_64 (3 << IDR0_TTF_SHIFT)
+#define IDR0_S1P (1 << 1)
+#define IDR0_S2P (1 << 0)
+
+#define ARM_SMMU_IDR1 0x4
+#define IDR1_TABLES_PRESET (1 << 30)
+#define IDR1_QUEUES_PRESET (1 << 29)
+#define IDR1_REL (1 << 28)
+#define IDR1_CMDQ_SHIFT 21
+#define IDR1_CMDQ_MASK 0x1f
+#define IDR1_EVTQ_SHIFT 16
+#define IDR1_EVTQ_MASK 0x1f
+#define IDR1_PRIQ_SHIFT 11
+#define IDR1_PRIQ_MASK 0x1f
+#define IDR1_SSID_SHIFT 6
+#define IDR1_SSID_MASK 0x1f
+#define IDR1_SID_SHIFT 0
+#define IDR1_SID_MASK 0x3f
+
+#define ARM_SMMU_IDR5 0x14
+#define IDR5_STALL_MAX_SHIFT 16
+#define IDR5_STALL_MAX_MASK 0xffff
+#define IDR5_GRAN64K (1 << 6)
+#define IDR5_GRAN16K (1 << 5)
+#define IDR5_GRAN4K (1 << 4)
+#define IDR5_OAS_SHIFT 0
+#define IDR5_OAS_MASK 0x7
+#define IDR5_OAS_32_BIT (0 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_36_BIT (1 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_40_BIT (2 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_42_BIT (3 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_44_BIT (4 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_48_BIT (5 << IDR5_OAS_SHIFT)
+
+#define ARM_SMMU_CR0 0x20
+#define CR0_CMDQEN (1 << 3)
+#define CR0_EVTQEN (1 << 2)
+#define CR0_PRIQEN (1 << 1)
+#define CR0_SMMUEN (1 << 0)
+
+#define ARM_SMMU_CR0ACK 0x24
+
+#define ARM_SMMU_CR1 0x28
+#define CR1_SH_NSH 0
+#define CR1_SH_OSH 2
+#define CR1_SH_ISH 3
+#define CR1_CACHE_NC 0
+#define CR1_CACHE_WB 1
+#define CR1_CACHE_WT 2
+#define CR1_TABLE_SH_SHIFT 10
+#define CR1_TABLE_OC_SHIFT 8
+#define CR1_TABLE_IC_SHIFT 6
+#define CR1_QUEUE_SH_SHIFT 4
+#define CR1_QUEUE_OC_SHIFT 2
+#define CR1_QUEUE_IC_SHIFT 0
+
+#define ARM_SMMU_CR2 0x2c
+#define CR2_PTM (1 << 2)
+#define CR2_RECINVSID (1 << 1)
+#define CR2_E2H (1 << 0)
+
+#define ARM_SMMU_GBPA 0x44
+#define GBPA_ABORT (1 << 20)
+#define GBPA_UPDATE (1 << 31)
+
+#define ARM_SMMU_IRQ_CTRL 0x50
+#define IRQ_CTRL_EVTQ_IRQEN (1 << 2)
+#define IRQ_CTRL_PRIQ_IRQEN (1 << 1)
+#define IRQ_CTRL_GERROR_IRQEN (1 << 0)
+
+#define ARM_SMMU_IRQ_CTRLACK 0x54
+
+#define ARM_SMMU_GERROR 0x60
+#define GERROR_SFM_ERR (1 << 8)
+#define GERROR_MSI_GERROR_ABT_ERR (1 << 7)
+#define GERROR_MSI_PRIQ_ABT_ERR (1 << 6)
+#define GERROR_MSI_EVTQ_ABT_ERR (1 << 5)
+#define GERROR_MSI_CMDQ_ABT_ERR (1 << 4)
+#define GERROR_PRIQ_ABT_ERR (1 << 3)
+#define GERROR_EVTQ_ABT_ERR (1 << 2)
+#define GERROR_CMDQ_ERR (1 << 0)
+#define GERROR_ERR_MASK 0xfd
+
+#define ARM_SMMU_GERRORN 0x64
+
+#define ARM_SMMU_GERROR_IRQ_CFG0 0x68
+#define ARM_SMMU_GERROR_IRQ_CFG1 0x70
+#define ARM_SMMU_GERROR_IRQ_CFG2 0x74
+
+#define ARM_SMMU_STRTAB_BASE 0x80
+#define STRTAB_BASE_RA (1UL << 62)
+#define STRTAB_BASE_ADDR_SHIFT 6
+#define STRTAB_BASE_ADDR_MASK 0x3ffffffffffUL
+
+#define ARM_SMMU_STRTAB_BASE_CFG 0x88
+#define STRTAB_BASE_CFG_LOG2SIZE_SHIFT 0
+#define STRTAB_BASE_CFG_LOG2SIZE_MASK 0x3f
+#define STRTAB_BASE_CFG_SPLIT_SHIFT 6
+#define STRTAB_BASE_CFG_SPLIT_MASK 0x1f
+#define STRTAB_BASE_CFG_FMT_SHIFT 16
+#define STRTAB_BASE_CFG_FMT_MASK 0x3
+#define STRTAB_BASE_CFG_FMT_LINEAR (0 << STRTAB_BASE_CFG_FMT_SHIFT)
+#define STRTAB_BASE_CFG_FMT_2LVL (1 << STRTAB_BASE_CFG_FMT_SHIFT)
+
+#define ARM_SMMU_CMDQ_BASE 0x90
+#define ARM_SMMU_CMDQ_PROD 0x98
+#define ARM_SMMU_CMDQ_CONS 0x9c
+
+#define ARM_SMMU_EVTQ_BASE 0xa0
+#define ARM_SMMU_EVTQ_PROD 0x100a8
+#define ARM_SMMU_EVTQ_CONS 0x100ac
+#define ARM_SMMU_EVTQ_IRQ_CFG0 0xb0
+#define ARM_SMMU_EVTQ_IRQ_CFG1 0xb8
+#define ARM_SMMU_EVTQ_IRQ_CFG2 0xbc
+
+#define ARM_SMMU_PRIQ_BASE 0xc0
+#define ARM_SMMU_PRIQ_PROD 0x100c8
+#define ARM_SMMU_PRIQ_CONS 0x100cc
+#define ARM_SMMU_PRIQ_IRQ_CFG0 0xd0
+#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
+#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
+
+/* Common MSI config fields */
+#define MSI_CFG0_ADDR_SHIFT 2
+#define MSI_CFG0_ADDR_MASK 0x3fffffffffffUL
+#define MSI_CFG2_SH_SHIFT 4
+#define MSI_CFG2_SH_NSH (0UL << MSI_CFG2_SH_SHIFT)
+#define MSI_CFG2_SH_OSH (2UL << MSI_CFG2_SH_SHIFT)
+#define MSI_CFG2_SH_ISH (3UL << MSI_CFG2_SH_SHIFT)
+#define MSI_CFG2_MEMATTR_SHIFT 0
+#define MSI_CFG2_MEMATTR_DEVICE_nGnRE (0x1 << MSI_CFG2_MEMATTR_SHIFT)
+
+#define Q_IDX(q, p) ((p) & ((1 << (q)->max_n_shift) - 1))
+#define Q_WRP(q, p) ((p) & (1 << (q)->max_n_shift))
+#define Q_OVERFLOW_FLAG (1 << 31)
+#define Q_OVF(q, p) ((p) & Q_OVERFLOW_FLAG)
+#define Q_ENT(q, p) ((q)->base + \
+ Q_IDX(q, p) * (q)->ent_dwords)
+
+#define Q_BASE_RWA (1UL << 62)
+#define Q_BASE_ADDR_SHIFT 5
+#define Q_BASE_ADDR_MASK 0xfffffffffffUL
+#define Q_BASE_LOG2SIZE_SHIFT 0
+#define Q_BASE_LOG2SIZE_MASK 0x1fUL
+
+/*
+ * Stream table.
+ *
+ * Linear: Enough to cover 1 << IDR1.SIDSIZE entries
+ * 2lvl: 128k L1 entries,
+ * 256 lazy entries per table (each table covers a PCI bus)
+ */
+#define STRTAB_L1_SZ_SHIFT 20
+#define STRTAB_SPLIT 8
+
+#define STRTAB_L1_DESC_DWORDS 1
+#define STRTAB_L1_DESC_SPAN_SHIFT 0
+#define STRTAB_L1_DESC_SPAN_MASK 0x1fUL
+#define STRTAB_L1_DESC_L2PTR_SHIFT 6
+#define STRTAB_L1_DESC_L2PTR_MASK 0x3ffffffffffUL
+
+#define STRTAB_STE_DWORDS 8
+#define STRTAB_STE_0_V (1UL << 0)
+#define STRTAB_STE_0_CFG_SHIFT 1
+#define STRTAB_STE_0_CFG_MASK 0x7UL
+#define STRTAB_STE_0_CFG_ABORT (0UL << STRTAB_STE_0_CFG_SHIFT)
+#define STRTAB_STE_0_CFG_BYPASS (4UL << STRTAB_STE_0_CFG_SHIFT)
+#define STRTAB_STE_0_CFG_S1_TRANS (5UL << STRTAB_STE_0_CFG_SHIFT)
+#define STRTAB_STE_0_CFG_S2_TRANS (6UL << STRTAB_STE_0_CFG_SHIFT)
+
+#define STRTAB_STE_0_S1FMT_SHIFT 4
+#define STRTAB_STE_0_S1FMT_LINEAR (0UL << STRTAB_STE_0_S1FMT_SHIFT)
+#define STRTAB_STE_0_S1CTXPTR_SHIFT 6
+#define STRTAB_STE_0_S1CTXPTR_MASK 0x3ffffffffffUL
+#define STRTAB_STE_0_S1CDMAX_SHIFT 59
+#define STRTAB_STE_0_S1CDMAX_MASK 0x1fUL
+
+#define STRTAB_STE_1_S1C_CACHE_NC 0UL
+#define STRTAB_STE_1_S1C_CACHE_WBRA 1UL
+#define STRTAB_STE_1_S1C_CACHE_WT 2UL
+#define STRTAB_STE_1_S1C_CACHE_WB 3UL
+#define STRTAB_STE_1_S1C_SH_NSH 0UL
+#define STRTAB_STE_1_S1C_SH_OSH 2UL
+#define STRTAB_STE_1_S1C_SH_ISH 3UL
+#define STRTAB_STE_1_S1CIR_SHIFT 2
+#define STRTAB_STE_1_S1COR_SHIFT 4
+#define STRTAB_STE_1_S1CSH_SHIFT 6
+
+#define STRTAB_STE_1_S1STALLD (1UL << 27)
+
+#define STRTAB_STE_1_EATS_ABT 0UL
+#define STRTAB_STE_1_EATS_TRANS 1UL
+#define STRTAB_STE_1_EATS_S1CHK 2UL
+#define STRTAB_STE_1_EATS_SHIFT 28
+
+#define STRTAB_STE_1_STRW_NSEL1 0UL
+#define STRTAB_STE_1_STRW_EL2 2UL
+#define STRTAB_STE_1_STRW_SHIFT 30
+
+#define STRTAB_STE_1_SHCFG_INCOMING 1UL
+#define STRTAB_STE_1_SHCFG_SHIFT 44
+
+#define STRTAB_STE_2_S2VMID_SHIFT 0
+#define STRTAB_STE_2_S2VMID_MASK 0xffffUL
+#define STRTAB_STE_2_VTCR_SHIFT 32
+#define STRTAB_STE_2_VTCR_MASK 0x7ffffUL
+#define STRTAB_STE_2_S2AA64 (1UL << 51)
+#define STRTAB_STE_2_S2ENDI (1UL << 52)
+#define STRTAB_STE_2_S2PTW (1UL << 54)
+#define STRTAB_STE_2_S2R (1UL << 58)
+
+#define STRTAB_STE_3_S2TTB_SHIFT 4
+#define STRTAB_STE_3_S2TTB_MASK 0xfffffffffffUL
+
+/* Context descriptor (stage-1 only) */
+#define CTXDESC_CD_DWORDS 8
+#define CTXDESC_CD_0_TCR_T0SZ_SHIFT 0
+#define ARM64_TCR_T0SZ_SHIFT 0
+#define ARM64_TCR_T0SZ_MASK 0x1fUL
+#define CTXDESC_CD_0_TCR_TG0_SHIFT 6
+#define ARM64_TCR_TG0_SHIFT 14
+#define ARM64_TCR_TG0_MASK 0x3UL
+#define CTXDESC_CD_0_TCR_IRGN0_SHIFT 8
+#define ARM64_TCR_IRGN0_SHIFT 8
+#define ARM64_TCR_IRGN0_MASK 0x3UL
+#define CTXDESC_CD_0_TCR_ORGN0_SHIFT 10
+#define ARM64_TCR_ORGN0_SHIFT 10
+#define ARM64_TCR_ORGN0_MASK 0x3UL
+#define CTXDESC_CD_0_TCR_SH0_SHIFT 12
+#define ARM64_TCR_SH0_SHIFT 12
+#define ARM64_TCR_SH0_MASK 0x3UL
+#define CTXDESC_CD_0_TCR_EPD0_SHIFT 14
+#define ARM64_TCR_EPD0_SHIFT 7
+#define ARM64_TCR_EPD0_MASK 0x1UL
+#define CTXDESC_CD_0_TCR_EPD1_SHIFT 30
+#define ARM64_TCR_EPD1_SHIFT 23
+#define ARM64_TCR_EPD1_MASK 0x1UL
+
+#define CTXDESC_CD_0_ENDI (1UL << 15)
+#define CTXDESC_CD_0_V (1UL << 31)
+
+#define CTXDESC_CD_0_TCR_IPS_SHIFT 32
+#define ARM64_TCR_IPS_SHIFT 32
+#define ARM64_TCR_IPS_MASK 0x7UL
+#define CTXDESC_CD_0_TCR_TBI0_SHIFT 38
+#define ARM64_TCR_TBI0_SHIFT 37
+#define ARM64_TCR_TBI0_MASK 0x1UL
+
+#define CTXDESC_CD_0_AA64 (1UL << 41)
+#define CTXDESC_CD_0_R (1UL << 45)
+#define CTXDESC_CD_0_A (1UL << 46)
+#define CTXDESC_CD_0_ASET_SHIFT 47
+#define CTXDESC_CD_0_ASET_SHARED (0UL << CTXDESC_CD_0_ASET_SHIFT)
+#define CTXDESC_CD_0_ASET_PRIVATE (1UL << CTXDESC_CD_0_ASET_SHIFT)
+#define CTXDESC_CD_0_ASID_SHIFT 48
+#define CTXDESC_CD_0_ASID_MASK 0xffffUL
+
+#define CTXDESC_CD_1_TTB0_SHIFT 4
+#define CTXDESC_CD_1_TTB0_MASK 0xfffffffffffUL
+
+#define CTXDESC_CD_3_MAIR_SHIFT 0
+
+/* Convert between AArch64 (CPU) TCR format and SMMU CD format */
+#define ARM_SMMU_TCR2CD(tcr, fld) \
+ (((tcr) >> ARM64_TCR_##fld##_SHIFT & ARM64_TCR_##fld##_MASK) \
+ << CTXDESC_CD_0_TCR_##fld##_SHIFT)
+
+/* Command queue */
+#define CMDQ_ENT_DWORDS 2
+#define CMDQ_MAX_SZ_SHIFT 8
+
+#define CMDQ_ERR_SHIFT 24
+#define CMDQ_ERR_MASK 0x7f
+#define CMDQ_ERR_CERROR_NONE_IDX 0
+#define CMDQ_ERR_CERROR_ILL_IDX 1
+#define CMDQ_ERR_CERROR_ABT_IDX 2
+
+#define CMDQ_0_OP_SHIFT 0
+#define CMDQ_0_OP_MASK 0xffUL
+#define CMDQ_0_SSV (1UL << 11)
+
+#define CMDQ_PREFETCH_0_SID_SHIFT 32
+#define CMDQ_PREFETCH_1_SIZE_SHIFT 0
+#define CMDQ_PREFETCH_1_ADDR_MASK ~0xfffUL
+
+#define CMDQ_CFGI_0_SID_SHIFT 32
+#define CMDQ_CFGI_0_SID_MASK 0xffffffffUL
+#define CMDQ_CFGI_1_LEAF (1UL << 0)
+#define CMDQ_CFGI_1_RANGE_SHIFT 0
+#define CMDQ_CFGI_1_RANGE_MASK 0x1fUL
+
+#define CMDQ_TLBI_0_VMID_SHIFT 32
+#define CMDQ_TLBI_0_ASID_SHIFT 48
+#define CMDQ_TLBI_1_LEAF (1UL << 0)
+#define CMDQ_TLBI_1_VA_MASK ~0xfffUL
+#define CMDQ_TLBI_1_IPA_MASK 0xfffffffff000UL
+
+#define CMDQ_PRI_0_SSID_SHIFT 12
+#define CMDQ_PRI_0_SSID_MASK 0xfffffUL
+#define CMDQ_PRI_0_SID_SHIFT 32
+#define CMDQ_PRI_0_SID_MASK 0xffffffffUL
+#define CMDQ_PRI_1_GRPID_SHIFT 0
+#define CMDQ_PRI_1_GRPID_MASK 0x1ffUL
+#define CMDQ_PRI_1_RESP_SHIFT 12
+#define CMDQ_PRI_1_RESP_DENY (0UL << CMDQ_PRI_1_RESP_SHIFT)
+#define CMDQ_PRI_1_RESP_FAIL (1UL << CMDQ_PRI_1_RESP_SHIFT)
+#define CMDQ_PRI_1_RESP_SUCC (2UL << CMDQ_PRI_1_RESP_SHIFT)
+
+#define CMDQ_SYNC_0_CS_SHIFT 12
+#define CMDQ_SYNC_0_CS_NONE (0UL << CMDQ_SYNC_0_CS_SHIFT)
+#define CMDQ_SYNC_0_CS_SEV (2UL << CMDQ_SYNC_0_CS_SHIFT)
+
+/* Event queue */
+#define EVTQ_ENT_DWORDS 4
+#define EVTQ_MAX_SZ_SHIFT 7
+
+#define EVTQ_0_ID_SHIFT 0
+#define EVTQ_0_ID_MASK 0xffUL
+
+/* PRI queue */
+#define PRIQ_ENT_DWORDS 2
+#define PRIQ_MAX_SZ_SHIFT 8
+
+#define PRIQ_0_SID_SHIFT 0
+#define PRIQ_0_SID_MASK 0xffffffffUL
+#define PRIQ_0_SSID_SHIFT 32
+#define PRIQ_0_SSID_MASK 0xfffffUL
+#define PRIQ_0_PERM_PRIV (1UL << 58)
+#define PRIQ_0_PERM_EXEC (1UL << 59)
+#define PRIQ_0_PERM_READ (1UL << 60)
+#define PRIQ_0_PERM_WRITE (1UL << 61)
+#define PRIQ_0_PRG_LAST (1UL << 62)
+#define PRIQ_0_SSID_V (1UL << 63)
+
+#define PRIQ_1_PRG_IDX_SHIFT 0
+#define PRIQ_1_PRG_IDX_MASK 0x1ffUL
+#define PRIQ_1_ADDR_SHIFT 12
+#define PRIQ_1_ADDR_MASK 0xfffffffffffffUL
+
+/* High-level queue structures */
+#define ARM_SMMU_POLL_TIMEOUT_US 100
+#define ARM_SMMU_CMDQ_DRAIN_TIMEOUT_US 1000000 /* 1s! */
+
+#define MSI_IOVA_BASE 0x8000000
+#define MSI_IOVA_LENGTH 0x100000
+
+/* Until ACPICA headers cover IORT rev. C */
+#ifndef ACPI_IORT_SMMU_HISILICON_HI161X
+#define ACPI_IORT_SMMU_HISILICON_HI161X 0x1
+#endif
+
+#ifndef ACPI_IORT_SMMU_V3_CAVIUM_CN99XX
+#define ACPI_IORT_SMMU_V3_CAVIUM_CN99XX 0x2
+#endif
+
+static bool disable_bypass;
+module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
+MODULE_PARM_DESC(disable_bypass,
+ "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
+
+enum pri_resp {
+ PRI_RESP_DENY,
+ PRI_RESP_FAIL,
+ PRI_RESP_SUCC,
+};
+
+enum arm_smmu_msi_index {
+ EVTQ_MSI_INDEX,
+ GERROR_MSI_INDEX,
+ PRIQ_MSI_INDEX,
+ ARM_SMMU_MAX_MSIS,
+};
+
+static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
+ [EVTQ_MSI_INDEX] = {
+ ARM_SMMU_EVTQ_IRQ_CFG0,
+ ARM_SMMU_EVTQ_IRQ_CFG1,
+ ARM_SMMU_EVTQ_IRQ_CFG2,
+ },
+ [GERROR_MSI_INDEX] = {
+ ARM_SMMU_GERROR_IRQ_CFG0,
+ ARM_SMMU_GERROR_IRQ_CFG1,
+ ARM_SMMU_GERROR_IRQ_CFG2,
+ },
+ [PRIQ_MSI_INDEX] = {
+ ARM_SMMU_PRIQ_IRQ_CFG0,
+ ARM_SMMU_PRIQ_IRQ_CFG1,
+ ARM_SMMU_PRIQ_IRQ_CFG2,
+ },
+};
+
+struct arm_smmu_cmdq_ent {
+ /* Common fields */
+ u8 opcode;
+ bool substream_valid;
+
+ /* Command-specific fields */
+ union {
+ #define CMDQ_OP_PREFETCH_CFG 0x1
+ struct {
+ u32 sid;
+ u8 size;
+ u64 addr;
+ } prefetch;
+
+ #define CMDQ_OP_CFGI_STE 0x3
+ #define CMDQ_OP_CFGI_ALL 0x4
+ struct {
+ u32 sid;
+ union {
+ bool leaf;
+ u8 span;
+ };
+ } cfgi;
+
+ #define CMDQ_OP_TLBI_NH_ASID 0x11
+ #define CMDQ_OP_TLBI_NH_VA 0x12
+ #define CMDQ_OP_TLBI_EL2_ALL 0x20
+ #define CMDQ_OP_TLBI_S12_VMALL 0x28
+ #define CMDQ_OP_TLBI_S2_IPA 0x2a
+ #define CMDQ_OP_TLBI_NSNH_ALL 0x30
+ struct {
+ u16 asid;
+ u16 vmid;
+ bool leaf;
+ u64 addr;
+ } tlbi;
+
+ #define CMDQ_OP_PRI_RESP 0x41
+ struct {
+ u32 sid;
+ u32 ssid;
+ u16 grpid;
+ enum pri_resp resp;
+ } pri;
+
+ #define CMDQ_OP_CMD_SYNC 0x46
+ };
+};
+
+struct arm_smmu_queue {
+ int irq; /* Wired interrupt */
+
+ __le64 *base;
+ dma_addr_t base_dma;
+ u64 q_base;
+
+ size_t ent_dwords;
+ u32 max_n_shift;
+ u32 prod;
+ u32 cons;
+
+ u32 __iomem *prod_reg;
+ u32 __iomem *cons_reg;
+};
+
+struct arm_smmu_cmdq {
+ struct arm_smmu_queue q;
+ spinlock_t lock;
+};
+
+struct arm_smmu_evtq {
+ struct arm_smmu_queue q;
+ u32 max_stalls;
+};
+
+struct arm_smmu_priq {
+ struct arm_smmu_queue q;
+};
+
+/* High-level stream table and context descriptor structures */
+struct arm_smmu_strtab_l1_desc {
+ u8 span;
+
+ __le64 *l2ptr;
+ dma_addr_t l2ptr_dma;
+};
+
+struct arm_smmu_s1_cfg {
+ __le64 *cdptr;
+ dma_addr_t cdptr_dma;
+
+ struct arm_smmu_ctx_desc {
+ u16 asid;
+ u64 ttbr;
+ u64 tcr;
+ u64 mair;
+ } cd;
+};
+
+struct arm_smmu_s2_cfg {
+ u16 vmid;
+ u64 vttbr;
+ u64 vtcr;
+};
+
+struct arm_smmu_strtab_ent {
+ /*
+ * An STE is "assigned" if the master emitting the corresponding SID
+ * is attached to a domain. The behaviour of an unassigned STE is
+ * determined by the disable_bypass parameter, whereas an assigned
+ * STE behaves according to s1_cfg/s2_cfg, which themselves are
+ * configured according to the domain type.
+ */
+ bool assigned;
+ struct arm_smmu_s1_cfg *s1_cfg;
+ struct arm_smmu_s2_cfg *s2_cfg;
+};
+
+struct arm_smmu_strtab_cfg {
+ __le64 *strtab;
+ dma_addr_t strtab_dma;
+ struct arm_smmu_strtab_l1_desc *l1_desc;
+ unsigned int num_l1_ents;
+
+ u64 strtab_base;
+ u32 strtab_base_cfg;
+};
+
+/* An SMMUv3 instance */
+struct arm_smmu_device {
+ struct device *dev;
+ void __iomem *base;
+
+#define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0)
+#define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1)
+#define ARM_SMMU_FEAT_TT_LE (1 << 2)
+#define ARM_SMMU_FEAT_TT_BE (1 << 3)
+#define ARM_SMMU_FEAT_PRI (1 << 4)
+#define ARM_SMMU_FEAT_ATS (1 << 5)
+#define ARM_SMMU_FEAT_SEV (1 << 6)
+#define ARM_SMMU_FEAT_MSI (1 << 7)
+#define ARM_SMMU_FEAT_COHERENCY (1 << 8)
+#define ARM_SMMU_FEAT_TRANS_S1 (1 << 9)
+#define ARM_SMMU_FEAT_TRANS_S2 (1 << 10)
+#define ARM_SMMU_FEAT_STALLS (1 << 11)
+#define ARM_SMMU_FEAT_HYP (1 << 12)
+ u32 features;
+
+#define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
+#define ARM_SMMU_OPT_PAGE0_REGS_ONLY (1 << 1)
+ u32 options;
+
+ struct arm_smmu_cmdq cmdq;
+ struct arm_smmu_evtq evtq;
+ struct arm_smmu_priq priq;
+
+ int gerr_irq;
+ int combined_irq;
+
+ unsigned long ias; /* IPA */
+ unsigned long oas; /* PA */
+ unsigned long pgsize_bitmap;
+
+#define ARM_SMMU_MAX_ASIDS (1 << 16)
+ unsigned int asid_bits;
+ DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
+
+#define ARM_SMMU_MAX_VMIDS (1 << 16)
+ unsigned int vmid_bits;
+ DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
+
+ unsigned int ssid_bits;
+ unsigned int sid_bits;
+
+ struct arm_smmu_strtab_cfg strtab_cfg;
+
+ /* IOMMU core code handle */
+ struct iommu_device iommu;
+};
+
+/* SMMU private data for each master */
+struct arm_smmu_master_data {
+ struct arm_smmu_device *smmu;
+ struct arm_smmu_strtab_ent ste;
+};
+
+/* SMMU private data for an IOMMU domain */
+enum arm_smmu_domain_stage {
+ ARM_SMMU_DOMAIN_S1 = 0,
+ ARM_SMMU_DOMAIN_S2,
+ ARM_SMMU_DOMAIN_NESTED,
+ ARM_SMMU_DOMAIN_BYPASS,
+};
+
+struct arm_smmu_domain {
+ struct arm_smmu_device *smmu;
+ struct mutex init_mutex; /* Protects smmu pointer */
+
+ struct io_pgtable_ops *pgtbl_ops;
+
+ enum arm_smmu_domain_stage stage;
+ union {
+ struct arm_smmu_s1_cfg s1_cfg;
+ struct arm_smmu_s2_cfg s2_cfg;
+ };
+
+ struct iommu_domain domain;
+};
+
+struct arm_smmu_option_prop {
+ u32 opt;
+ const char *prop;
+};
+
+static struct arm_smmu_option_prop arm_smmu_options[] = {
+ { ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
+ { ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
+ { 0, NULL},
+};
+
+static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
+ struct arm_smmu_device *smmu)
+{
+ if ((offset > SZ_64K) &&
+ (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY))
+ offset -= SZ_64K;
+
+ return smmu->base + offset;
+}
+
+static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct arm_smmu_domain, domain);
+}
+
+static void parse_driver_options(struct arm_smmu_device *smmu)
+{
+ int i = 0;
+
+ do {
+ if (of_property_read_bool(smmu->dev->of_node,
+ arm_smmu_options[i].prop)) {
+ smmu->options |= arm_smmu_options[i].opt;
+ dev_notice(smmu->dev, "option %s\n",
+ arm_smmu_options[i].prop);
+ }
+ } while (arm_smmu_options[++i].opt);
+}
+
+/* Low-level queue manipulation functions */
+static bool queue_full(struct arm_smmu_queue *q)
+{
+ return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+ Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
+}
+
+static bool queue_empty(struct arm_smmu_queue *q)
+{
+ return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+ Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
+}
+
+static void queue_sync_cons(struct arm_smmu_queue *q)
+{
+ q->cons = readl_relaxed(q->cons_reg);
+}
+
+static void queue_inc_cons(struct arm_smmu_queue *q)
+{
+ u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
+
+ q->cons = Q_OVF(q, q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
+ writel(q->cons, q->cons_reg);
+}
+
+static int queue_sync_prod(struct arm_smmu_queue *q)
+{
+ int ret = 0;
+ u32 prod = readl_relaxed(q->prod_reg);
+
+ if (Q_OVF(q, prod) != Q_OVF(q, q->prod))
+ ret = -EOVERFLOW;
+
+ q->prod = prod;
+ return ret;
+}
+
+static void queue_inc_prod(struct arm_smmu_queue *q)
+{
+ u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
+
+ q->prod = Q_OVF(q, q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+ writel(q->prod, q->prod_reg);
+}
+
+/*
+ * Wait for the SMMU to consume items. If drain is true, wait until the queue
+ * is empty. Otherwise, wait until there is at least one free slot.
+ */
+static int queue_poll_cons(struct arm_smmu_queue *q, bool drain, bool wfe)
+{
+ ktime_t timeout;
+ unsigned int delay = 1;
+
+ /* Wait longer if it's queue drain */
+ timeout = ktime_add_us(ktime_get(), drain ?
+ ARM_SMMU_CMDQ_DRAIN_TIMEOUT_US :
+ ARM_SMMU_POLL_TIMEOUT_US);
+
+ while (queue_sync_cons(q), (drain ? !queue_empty(q) : queue_full(q))) {
+ if (ktime_compare(ktime_get(), timeout) > 0)
+ return -ETIMEDOUT;
+
+ if (wfe) {
+ wfe();
+ } else {
+ cpu_relax();
+ udelay(delay);
+ delay *= 2;
+ }
+ }
+
+ return 0;
+}
+
+static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
+{
+ int i;
+
+ for (i = 0; i < n_dwords; ++i)
+ *dst++ = cpu_to_le64(*src++);
+}
+
+static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+ if (queue_full(q))
+ return -ENOSPC;
+
+ queue_write(Q_ENT(q, q->prod), ent, q->ent_dwords);
+ queue_inc_prod(q);
+ return 0;
+}
+
+static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
+{
+ int i;
+
+ for (i = 0; i < n_dwords; ++i)
+ *dst++ = le64_to_cpu(*src++);
+}
+
+static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+ if (queue_empty(q))
+ return -EAGAIN;
+
+ queue_read(ent, Q_ENT(q, q->cons), q->ent_dwords);
+ queue_inc_cons(q);
+ return 0;
+}
+
+/* High-level queue accessors */
+static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
+{
+ memset(cmd, 0, CMDQ_ENT_DWORDS << 3);
+ cmd[0] |= (ent->opcode & CMDQ_0_OP_MASK) << CMDQ_0_OP_SHIFT;
+
+ switch (ent->opcode) {
+ case CMDQ_OP_TLBI_EL2_ALL:
+ case CMDQ_OP_TLBI_NSNH_ALL:
+ break;
+ case CMDQ_OP_PREFETCH_CFG:
+ cmd[0] |= (u64)ent->prefetch.sid << CMDQ_PREFETCH_0_SID_SHIFT;
+ cmd[1] |= ent->prefetch.size << CMDQ_PREFETCH_1_SIZE_SHIFT;
+ cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
+ break;
+ case CMDQ_OP_CFGI_STE:
+ cmd[0] |= (u64)ent->cfgi.sid << CMDQ_CFGI_0_SID_SHIFT;
+ cmd[1] |= ent->cfgi.leaf ? CMDQ_CFGI_1_LEAF : 0;
+ break;
+ case CMDQ_OP_CFGI_ALL:
+ /* Cover the entire SID range */
+ cmd[1] |= CMDQ_CFGI_1_RANGE_MASK << CMDQ_CFGI_1_RANGE_SHIFT;
+ break;
+ case CMDQ_OP_TLBI_NH_VA:
+ cmd[0] |= (u64)ent->tlbi.asid << CMDQ_TLBI_0_ASID_SHIFT;
+ cmd[1] |= ent->tlbi.leaf ? CMDQ_TLBI_1_LEAF : 0;
+ cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
+ break;
+ case CMDQ_OP_TLBI_S2_IPA:
+ cmd[0] |= (u64)ent->tlbi.vmid << CMDQ_TLBI_0_VMID_SHIFT;
+ cmd[1] |= ent->tlbi.leaf ? CMDQ_TLBI_1_LEAF : 0;
+ cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
+ break;
+ case CMDQ_OP_TLBI_NH_ASID:
+ cmd[0] |= (u64)ent->tlbi.asid << CMDQ_TLBI_0_ASID_SHIFT;
+ /* Fallthrough */
+ case CMDQ_OP_TLBI_S12_VMALL:
+ cmd[0] |= (u64)ent->tlbi.vmid << CMDQ_TLBI_0_VMID_SHIFT;
+ break;
+ case CMDQ_OP_PRI_RESP:
+ cmd[0] |= ent->substream_valid ? CMDQ_0_SSV : 0;
+ cmd[0] |= ent->pri.ssid << CMDQ_PRI_0_SSID_SHIFT;
+ cmd[0] |= (u64)ent->pri.sid << CMDQ_PRI_0_SID_SHIFT;
+ cmd[1] |= ent->pri.grpid << CMDQ_PRI_1_GRPID_SHIFT;
+ switch (ent->pri.resp) {
+ case PRI_RESP_DENY:
+ cmd[1] |= CMDQ_PRI_1_RESP_DENY;
+ break;
+ case PRI_RESP_FAIL:
+ cmd[1] |= CMDQ_PRI_1_RESP_FAIL;
+ break;
+ case PRI_RESP_SUCC:
+ cmd[1] |= CMDQ_PRI_1_RESP_SUCC;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case CMDQ_OP_CMD_SYNC:
+ cmd[0] |= CMDQ_SYNC_0_CS_SEV;
+ break;
+ default:
+ return -ENOENT;
+ }
+
+ return 0;
+}
+
+static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
+{
+ static const char *cerror_str[] = {
+ [CMDQ_ERR_CERROR_NONE_IDX] = "No error",
+ [CMDQ_ERR_CERROR_ILL_IDX] = "Illegal command",
+ [CMDQ_ERR_CERROR_ABT_IDX] = "Abort on command fetch",
+ };
+
+ int i;
+ u64 cmd[CMDQ_ENT_DWORDS];
+ struct arm_smmu_queue *q = &smmu->cmdq.q;
+ u32 cons = readl_relaxed(q->cons_reg);
+ u32 idx = cons >> CMDQ_ERR_SHIFT & CMDQ_ERR_MASK;
+ struct arm_smmu_cmdq_ent cmd_sync = {
+ .opcode = CMDQ_OP_CMD_SYNC,
+ };
+
+ dev_err(smmu->dev, "CMDQ error (cons 0x%08x): %s\n", cons,
+ idx < ARRAY_SIZE(cerror_str) ? cerror_str[idx] : "Unknown");
+
+ switch (idx) {
+ case CMDQ_ERR_CERROR_ABT_IDX:
+ dev_err(smmu->dev, "retrying command fetch\n");
+ case CMDQ_ERR_CERROR_NONE_IDX:
+ return;
+ case CMDQ_ERR_CERROR_ILL_IDX:
+ /* Fallthrough */
+ default:
+ break;
+ }
+
+ /*
+ * We may have concurrent producers, so we need to be careful
+ * not to touch any of the shadow cmdq state.
+ */
+ queue_read(cmd, Q_ENT(q, cons), q->ent_dwords);
+ dev_err(smmu->dev, "skipping command in error state:\n");
+ for (i = 0; i < ARRAY_SIZE(cmd); ++i)
+ dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);
+
+ /* Convert the erroneous command into a CMD_SYNC */
+ if (arm_smmu_cmdq_build_cmd(cmd, &cmd_sync)) {
+ dev_err(smmu->dev, "failed to convert to CMD_SYNC\n");
+ return;
+ }
+
+ queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
+}
+
+static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
+ struct arm_smmu_cmdq_ent *ent)
+{
+ u64 cmd[CMDQ_ENT_DWORDS];
+ unsigned long flags;
+ bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+ struct arm_smmu_queue *q = &smmu->cmdq.q;
+
+ if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
+ dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
+ ent->opcode);
+ return;
+ }
+
+ spin_lock_irqsave(&smmu->cmdq.lock, flags);
+ while (queue_insert_raw(q, cmd) == -ENOSPC) {
+ if (queue_poll_cons(q, false, wfe))
+ dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
+ }
+
+ if (ent->opcode == CMDQ_OP_CMD_SYNC && queue_poll_cons(q, true, wfe))
+ dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
+ spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
+}
+
+/* Context descriptor manipulation functions */
+static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
+{
+ u64 val = 0;
+
+ /* Repack the TCR. Just care about TTBR0 for now */
+ val |= ARM_SMMU_TCR2CD(tcr, T0SZ);
+ val |= ARM_SMMU_TCR2CD(tcr, TG0);
+ val |= ARM_SMMU_TCR2CD(tcr, IRGN0);
+ val |= ARM_SMMU_TCR2CD(tcr, ORGN0);
+ val |= ARM_SMMU_TCR2CD(tcr, SH0);
+ val |= ARM_SMMU_TCR2CD(tcr, EPD0);
+ val |= ARM_SMMU_TCR2CD(tcr, EPD1);
+ val |= ARM_SMMU_TCR2CD(tcr, IPS);
+ val |= ARM_SMMU_TCR2CD(tcr, TBI0);
+
+ return val;
+}
+
+static void arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
+ struct arm_smmu_s1_cfg *cfg)
+{
+ u64 val;
+
+ /*
+ * We don't need to issue any invalidation here, as we'll invalidate
+ * the STE when installing the new entry anyway.
+ */
+ val = arm_smmu_cpu_tcr_to_cd(cfg->cd.tcr) |
+#ifdef __BIG_ENDIAN
+ CTXDESC_CD_0_ENDI |
+#endif
+ CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET_PRIVATE |
+ CTXDESC_CD_0_AA64 | (u64)cfg->cd.asid << CTXDESC_CD_0_ASID_SHIFT |
+ CTXDESC_CD_0_V;
+ cfg->cdptr[0] = cpu_to_le64(val);
+
+ val = cfg->cd.ttbr & CTXDESC_CD_1_TTB0_MASK << CTXDESC_CD_1_TTB0_SHIFT;
+ cfg->cdptr[1] = cpu_to_le64(val);
+
+ cfg->cdptr[3] = cpu_to_le64(cfg->cd.mair << CTXDESC_CD_3_MAIR_SHIFT);
+}
+
+/* Stream table manipulation functions */
+static void
+arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
+{
+ u64 val = 0;
+
+ val |= (desc->span & STRTAB_L1_DESC_SPAN_MASK)
+ << STRTAB_L1_DESC_SPAN_SHIFT;
+ val |= desc->l2ptr_dma &
+ STRTAB_L1_DESC_L2PTR_MASK << STRTAB_L1_DESC_L2PTR_SHIFT;
+
+ *dst = cpu_to_le64(val);
+}
+
+static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+ struct arm_smmu_cmdq_ent cmd = {
+ .opcode = CMDQ_OP_CFGI_STE,
+ .cfgi = {
+ .sid = sid,
+ .leaf = true,
+ },
+ };
+
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ cmd.opcode = CMDQ_OP_CMD_SYNC;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+}
+
+static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
+ __le64 *dst, struct arm_smmu_strtab_ent *ste)
+{
+ /*
+ * This is hideously complicated, but we only really care about
+ * three cases at the moment:
+ *
+ * 1. Invalid (all zero) -> bypass/fault (init)
+ * 2. Bypass/fault -> translation/bypass (attach)
+ * 3. Translation/bypass -> bypass/fault (detach)
+ *
+ * Given that we can't update the STE atomically and the SMMU
+ * doesn't read the thing in a defined order, that leaves us
+ * with the following maintenance requirements:
+ *
+ * 1. Update Config, return (init time STEs aren't live)
+ * 2. Write everything apart from dword 0, sync, write dword 0, sync
+ * 3. Update Config, sync
+ */
+ u64 val = le64_to_cpu(dst[0]);
+ bool ste_live = false;
+ struct arm_smmu_cmdq_ent prefetch_cmd = {
+ .opcode = CMDQ_OP_PREFETCH_CFG,
+ .prefetch = {
+ .sid = sid,
+ },
+ };
+
+ if (val & STRTAB_STE_0_V) {
+ u64 cfg;
+
+ cfg = val & STRTAB_STE_0_CFG_MASK << STRTAB_STE_0_CFG_SHIFT;
+ switch (cfg) {
+ case STRTAB_STE_0_CFG_BYPASS:
+ break;
+ case STRTAB_STE_0_CFG_S1_TRANS:
+ case STRTAB_STE_0_CFG_S2_TRANS:
+ ste_live = true;
+ break;
+ case STRTAB_STE_0_CFG_ABORT:
+ if (disable_bypass)
+ break;
+ default:
+ BUG(); /* STE corruption */
+ }
+ }
+
+ /* Nuke the existing STE_0 value, as we're going to rewrite it */
+ val = STRTAB_STE_0_V;
+
+ /* Bypass/fault */
+ if (!ste->assigned || !(ste->s1_cfg || ste->s2_cfg)) {
+ if (!ste->assigned && disable_bypass)
+ val |= STRTAB_STE_0_CFG_ABORT;
+ else
+ val |= STRTAB_STE_0_CFG_BYPASS;
+
+ dst[0] = cpu_to_le64(val);
+ dst[1] = cpu_to_le64(STRTAB_STE_1_SHCFG_INCOMING
+ << STRTAB_STE_1_SHCFG_SHIFT);
+ dst[2] = 0; /* Nuke the VMID */
+ if (ste_live)
+ arm_smmu_sync_ste_for_sid(smmu, sid);
+ return;
+ }
+
+ if (ste->s1_cfg) {
+ BUG_ON(ste_live);
+ dst[1] = cpu_to_le64(
+ STRTAB_STE_1_S1C_CACHE_WBRA
+ << STRTAB_STE_1_S1CIR_SHIFT |
+ STRTAB_STE_1_S1C_CACHE_WBRA
+ << STRTAB_STE_1_S1COR_SHIFT |
+ STRTAB_STE_1_S1C_SH_ISH << STRTAB_STE_1_S1CSH_SHIFT |
+#ifdef CONFIG_PCI_ATS
+ STRTAB_STE_1_EATS_TRANS << STRTAB_STE_1_EATS_SHIFT |
+#endif
+ STRTAB_STE_1_STRW_NSEL1 << STRTAB_STE_1_STRW_SHIFT);
+
+ if (smmu->features & ARM_SMMU_FEAT_STALLS)
+ dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
+
+ val |= (ste->s1_cfg->cdptr_dma & STRTAB_STE_0_S1CTXPTR_MASK
+ << STRTAB_STE_0_S1CTXPTR_SHIFT) |
+ STRTAB_STE_0_CFG_S1_TRANS;
+ }
+
+ if (ste->s2_cfg) {
+ BUG_ON(ste_live);
+ dst[2] = cpu_to_le64(
+ ste->s2_cfg->vmid << STRTAB_STE_2_S2VMID_SHIFT |
+ (ste->s2_cfg->vtcr & STRTAB_STE_2_VTCR_MASK)
+ << STRTAB_STE_2_VTCR_SHIFT |
+#ifdef __BIG_ENDIAN
+ STRTAB_STE_2_S2ENDI |
+#endif
+ STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2AA64 |
+ STRTAB_STE_2_S2R);
+
+ dst[3] = cpu_to_le64(ste->s2_cfg->vttbr &
+ STRTAB_STE_3_S2TTB_MASK << STRTAB_STE_3_S2TTB_SHIFT);
+
+ val |= STRTAB_STE_0_CFG_S2_TRANS;
+ }
+
+ arm_smmu_sync_ste_for_sid(smmu, sid);
+ dst[0] = cpu_to_le64(val);
+ arm_smmu_sync_ste_for_sid(smmu, sid);
+
+ /* It's likely that we'll want to use the new STE soon */
+ if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH))
+ arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+}
+
+static void arm_smmu_init_bypass_stes(u64 *strtab, unsigned int nent)
+{
+ unsigned int i;
+ struct arm_smmu_strtab_ent ste = { .assigned = false };
+
+ for (i = 0; i < nent; ++i) {
+ arm_smmu_write_strtab_ent(NULL, -1, strtab, &ste);
+ strtab += STRTAB_STE_DWORDS;
+ }
+}
+
+static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
+{
+ size_t size;
+ void *strtab;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+ struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
+
+ if (desc->l2ptr)
+ return 0;
+
+ size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
+ strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
+
+ desc->span = STRTAB_SPLIT + 1;
+ desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!desc->l2ptr) {
+ dev_err(smmu->dev,
+ "failed to allocate l2 stream table for SID %u\n",
+ sid);
+ return -ENOMEM;
+ }
+
+ arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
+ arm_smmu_write_strtab_l1_desc(strtab, desc);
+ return 0;
+}
+
+/* IRQ and event handlers */
+static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+{
+ int i;
+ struct arm_smmu_device *smmu = dev;
+ struct arm_smmu_queue *q = &smmu->evtq.q;
+ u64 evt[EVTQ_ENT_DWORDS];
+
+ do {
+ while (!queue_remove_raw(q, evt)) {
+ u8 id = evt[0] >> EVTQ_0_ID_SHIFT & EVTQ_0_ID_MASK;
+
+ dev_info(smmu->dev, "event 0x%02x received:\n", id);
+ for (i = 0; i < ARRAY_SIZE(evt); ++i)
+ dev_info(smmu->dev, "\t0x%016llx\n",
+ (unsigned long long)evt[i]);
+
+ }
+
+ /*
+ * Not much we can do on overflow, so scream and pretend we're
+ * trying harder.
+ */
+ if (queue_sync_prod(q) == -EOVERFLOW)
+ dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
+ } while (!queue_empty(q));
+
+ /* Sync our overflow flag, as we believe we're up to speed */
+ q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
+ return IRQ_HANDLED;
+}
+
+static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
+{
+ u32 sid, ssid;
+ u16 grpid;
+ bool ssv, last;
+
+ sid = evt[0] >> PRIQ_0_SID_SHIFT & PRIQ_0_SID_MASK;
+ ssv = evt[0] & PRIQ_0_SSID_V;
+ ssid = ssv ? evt[0] >> PRIQ_0_SSID_SHIFT & PRIQ_0_SSID_MASK : 0;
+ last = evt[0] & PRIQ_0_PRG_LAST;
+ grpid = evt[1] >> PRIQ_1_PRG_IDX_SHIFT & PRIQ_1_PRG_IDX_MASK;
+
+ dev_info(smmu->dev, "unexpected PRI request received:\n");
+ dev_info(smmu->dev,
+ "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
+ sid, ssid, grpid, last ? "L" : "",
+ evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
+ evt[0] & PRIQ_0_PERM_READ ? "R" : "",
+ evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
+ evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
+ evt[1] & PRIQ_1_ADDR_MASK << PRIQ_1_ADDR_SHIFT);
+
+ if (last) {
+ struct arm_smmu_cmdq_ent cmd = {
+ .opcode = CMDQ_OP_PRI_RESP,
+ .substream_valid = ssv,
+ .pri = {
+ .sid = sid,
+ .ssid = ssid,
+ .grpid = grpid,
+ .resp = PRI_RESP_DENY,
+ },
+ };
+
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ }
+}
+
+static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+{
+ struct arm_smmu_device *smmu = dev;
+ struct arm_smmu_queue *q = &smmu->priq.q;
+ u64 evt[PRIQ_ENT_DWORDS];
+
+ do {
+ while (!queue_remove_raw(q, evt))
+ arm_smmu_handle_ppr(smmu, evt);
+
+ if (queue_sync_prod(q) == -EOVERFLOW)
+ dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
+ } while (!queue_empty(q));
+
+ /* Sync our overflow flag, as we believe we're up to speed */
+ q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t arm_smmu_cmdq_sync_handler(int irq, void *dev)
+{
+ /* We don't actually use CMD_SYNC interrupts for anything */
+ return IRQ_HANDLED;
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
+
+static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
+{
+ u32 gerror, gerrorn, active;
+ struct arm_smmu_device *smmu = dev;
+
+ gerror = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
+ gerrorn = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
+
+ active = gerror ^ gerrorn;
+ if (!(active & GERROR_ERR_MASK))
+ return IRQ_NONE; /* No errors pending */
+
+ dev_warn(smmu->dev,
+ "unexpected global error reported (0x%08x), this could be serious\n",
+ active);
+
+ if (active & GERROR_SFM_ERR) {
+ dev_err(smmu->dev, "device has entered Service Failure Mode!\n");
+ arm_smmu_device_disable(smmu);
+ }
+
+ if (active & GERROR_MSI_GERROR_ABT_ERR)
+ dev_warn(smmu->dev, "GERROR MSI write aborted\n");
+
+ if (active & GERROR_MSI_PRIQ_ABT_ERR)
+ dev_warn(smmu->dev, "PRIQ MSI write aborted\n");
+
+ if (active & GERROR_MSI_EVTQ_ABT_ERR)
+ dev_warn(smmu->dev, "EVTQ MSI write aborted\n");
+
+ if (active & GERROR_MSI_CMDQ_ABT_ERR) {
+ dev_warn(smmu->dev, "CMDQ MSI write aborted\n");
+ arm_smmu_cmdq_sync_handler(irq, smmu->dev);
+ }
+
+ if (active & GERROR_PRIQ_ABT_ERR)
+ dev_err(smmu->dev, "PRIQ write aborted -- events may have been lost\n");
+
+ if (active & GERROR_EVTQ_ABT_ERR)
+ dev_err(smmu->dev, "EVTQ write aborted -- events may have been lost\n");
+
+ if (active & GERROR_CMDQ_ERR)
+ arm_smmu_cmdq_skip_err(smmu);
+
+ writel(gerror, smmu->base + ARM_SMMU_GERRORN);
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
+{
+ struct arm_smmu_device *smmu = dev;
+
+ arm_smmu_evtq_thread(irq, dev);
+ if (smmu->features & ARM_SMMU_FEAT_PRI)
+ arm_smmu_priq_thread(irq, dev);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
+{
+ arm_smmu_gerror_handler(irq, dev);
+ arm_smmu_cmdq_sync_handler(irq, dev);
+ return IRQ_WAKE_THREAD;
+}
+
+/* IO_PGTABLE API */
+static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
+{
+ struct arm_smmu_cmdq_ent cmd;
+
+ cmd.opcode = CMDQ_OP_CMD_SYNC;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+}
+
+static void arm_smmu_tlb_sync(void *cookie)
+{
+ struct arm_smmu_domain *smmu_domain = cookie;
+ __arm_smmu_tlb_sync(smmu_domain->smmu);
+}
+
+static void arm_smmu_tlb_inv_context(void *cookie)
+{
+ struct arm_smmu_domain *smmu_domain = cookie;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_cmdq_ent cmd;
+
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ cmd.opcode = CMDQ_OP_TLBI_NH_ASID;
+ cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid;
+ cmd.tlbi.vmid = 0;
+ } else {
+ cmd.opcode = CMDQ_OP_TLBI_S12_VMALL;
+ cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
+ }
+
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ __arm_smmu_tlb_sync(smmu);
+}
+
+static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+ size_t granule, bool leaf, void *cookie)
+{
+ struct arm_smmu_domain *smmu_domain = cookie;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_cmdq_ent cmd = {
+ .tlbi = {
+ .leaf = leaf,
+ .addr = iova,
+ },
+ };
+
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ cmd.opcode = CMDQ_OP_TLBI_NH_VA;
+ cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid;
+ } else {
+ cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
+ cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
+ }
+
+ do {
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ cmd.tlbi.addr += granule;
+ } while (size -= granule);
+}
+
+static const struct iommu_gather_ops arm_smmu_gather_ops = {
+ .tlb_flush_all = arm_smmu_tlb_inv_context,
+ .tlb_add_flush = arm_smmu_tlb_inv_range_nosync,
+ .tlb_sync = arm_smmu_tlb_sync,
+};
+
+/* IOMMU API */
+static bool arm_smmu_capable(enum iommu_cap cap)
+{
+ switch (cap) {
+ case IOMMU_CAP_CACHE_COHERENCY:
+ return true;
+ case IOMMU_CAP_NOEXEC:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
+{
+ struct arm_smmu_domain *smmu_domain;
+
+ if (type != IOMMU_DOMAIN_UNMANAGED &&
+ type != IOMMU_DOMAIN_DMA &&
+ type != IOMMU_DOMAIN_IDENTITY)
+ return NULL;
+
+ /*
+ * Allocate the domain and initialise some of its data structures.
+ * We can't really do anything meaningful until we've added a
+ * master.
+ */
+ smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL);
+ if (!smmu_domain)
+ return NULL;
+
+ if (type == IOMMU_DOMAIN_DMA &&
+ iommu_get_dma_cookie(&smmu_domain->domain)) {
+ kfree(smmu_domain);
+ return NULL;
+ }
+
+ mutex_init(&smmu_domain->init_mutex);
+ return &smmu_domain->domain;
+}
+
+static int arm_smmu_bitmap_alloc(unsigned long *map, int span)
+{
+ int idx, size = 1 << span;
+
+ do {
+ idx = find_first_zero_bit(map, size);
+ if (idx == size)
+ return -ENOSPC;
+ } while (test_and_set_bit(idx, map));
+
+ return idx;
+}
+
+static void arm_smmu_bitmap_free(unsigned long *map, int idx)
+{
+ clear_bit(idx, map);
+}
+
+static void arm_smmu_domain_free(struct iommu_domain *domain)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+ iommu_put_dma_cookie(domain);
+ free_io_pgtable_ops(smmu_domain->pgtbl_ops);
+
+ /* Free the CD and ASID, if we allocated them */
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+
+ if (cfg->cdptr) {
+ dmam_free_coherent(smmu_domain->smmu->dev,
+ CTXDESC_CD_DWORDS << 3,
+ cfg->cdptr,
+ cfg->cdptr_dma);
+
+ arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
+ }
+ } else {
+ struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+ if (cfg->vmid)
+ arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
+ }
+
+ kfree(smmu_domain);
+}
+
+static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
+ struct io_pgtable_cfg *pgtbl_cfg)
+{
+ int ret;
+ int asid;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+
+ asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
+ if (asid < 0)
+ return asid;
+
+ cfg->cdptr = dmam_alloc_coherent(smmu->dev, CTXDESC_CD_DWORDS << 3,
+ &cfg->cdptr_dma,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!cfg->cdptr) {
+ dev_warn(smmu->dev, "failed to allocate context descriptor\n");
+ ret = -ENOMEM;
+ goto out_free_asid;
+ }
+
+ cfg->cd.asid = (u16)asid;
+ cfg->cd.ttbr = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0];
+ cfg->cd.tcr = pgtbl_cfg->arm_lpae_s1_cfg.tcr;
+ cfg->cd.mair = pgtbl_cfg->arm_lpae_s1_cfg.mair[0];
+ return 0;
+
+out_free_asid:
+ arm_smmu_bitmap_free(smmu->asid_map, asid);
+ return ret;
+}
+
+static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
+ struct io_pgtable_cfg *pgtbl_cfg)
+{
+ int vmid;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+
+ vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
+ if (vmid < 0)
+ return vmid;
+
+ cfg->vmid = (u16)vmid;
+ cfg->vttbr = pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
+ cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
+ return 0;
+}
+
+static int arm_smmu_domain_finalise(struct iommu_domain *domain)
+{
+ int ret;
+ unsigned long ias, oas;
+ enum io_pgtable_fmt fmt;
+ struct io_pgtable_cfg pgtbl_cfg;
+ struct io_pgtable_ops *pgtbl_ops;
+ int (*finalise_stage_fn)(struct arm_smmu_domain *,
+ struct io_pgtable_cfg *);
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+ if (domain->type == IOMMU_DOMAIN_IDENTITY) {
+ smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
+ return 0;
+ }
+
+ /* Restrict the stage to what we can actually support */
+ if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
+ smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
+ if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
+ smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+
+ switch (smmu_domain->stage) {
+ case ARM_SMMU_DOMAIN_S1:
+ ias = VA_BITS;
+ oas = smmu->ias;
+ fmt = ARM_64_LPAE_S1;
+ finalise_stage_fn = arm_smmu_domain_finalise_s1;
+ break;
+ case ARM_SMMU_DOMAIN_NESTED:
+ case ARM_SMMU_DOMAIN_S2:
+ ias = smmu->ias;
+ oas = smmu->oas;
+ fmt = ARM_64_LPAE_S2;
+ finalise_stage_fn = arm_smmu_domain_finalise_s2;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ pgtbl_cfg = (struct io_pgtable_cfg) {
+ .pgsize_bitmap = smmu->pgsize_bitmap,
+ .ias = ias,
+ .oas = oas,
+ .tlb = &arm_smmu_gather_ops,
+ .iommu_dev = smmu->dev,
+ };
+
+ if (smmu->features & ARM_SMMU_FEAT_COHERENCY)
+ pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_NO_DMA;
+
+ pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
+ if (!pgtbl_ops)
+ return -ENOMEM;
+
+ domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
+ domain->geometry.aperture_end = (1UL << ias) - 1;
+ domain->geometry.force_aperture = true;
+ smmu_domain->pgtbl_ops = pgtbl_ops;
+
+ ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
+ if (ret < 0)
+ free_io_pgtable_ops(pgtbl_ops);
+
+ return ret;
+}
+
+static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+ __le64 *step;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+ if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+ struct arm_smmu_strtab_l1_desc *l1_desc;
+ int idx;
+
+ /* Two-level walk */
+ idx = (sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS;
+ l1_desc = &cfg->l1_desc[idx];
+ idx = (sid & ((1 << STRTAB_SPLIT) - 1)) * STRTAB_STE_DWORDS;
+ step = &l1_desc->l2ptr[idx];
+ } else {
+ /* Simple linear lookup */
+ step = &cfg->strtab[sid * STRTAB_STE_DWORDS];
+ }
+
+ return step;
+}
+
+static void arm_smmu_install_ste_for_dev(struct iommu_fwspec *fwspec)
+{
+ int i;
+ struct arm_smmu_master_data *master = fwspec->iommu_priv;
+ struct arm_smmu_device *smmu = master->smmu;
+
+ for (i = 0; i < fwspec->num_ids; ++i) {
+ u32 sid = fwspec->ids[i];
+ __le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
+
+ arm_smmu_write_strtab_ent(smmu, sid, step, &master->ste);
+ }
+}
+
+static void arm_smmu_detach_dev(struct device *dev)
+{
+ struct arm_smmu_master_data *master = dev->iommu_fwspec->iommu_priv;
+
+ master->ste.assigned = false;
+ arm_smmu_install_ste_for_dev(dev->iommu_fwspec);
+}
+
+static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+ int ret = 0;
+ struct arm_smmu_device *smmu;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_master_data *master;
+ struct arm_smmu_strtab_ent *ste;
+
+ if (!dev->iommu_fwspec)
+ return -ENOENT;
+
+ master = dev->iommu_fwspec->iommu_priv;
+ smmu = master->smmu;
+ ste = &master->ste;
+
+ /* Already attached to a different domain? */
+ if (ste->assigned)
+ arm_smmu_detach_dev(dev);
+
+ mutex_lock(&smmu_domain->init_mutex);
+
+ if (!smmu_domain->smmu) {
+ smmu_domain->smmu = smmu;
+ ret = arm_smmu_domain_finalise(domain);
+ if (ret) {
+ smmu_domain->smmu = NULL;
+ goto out_unlock;
+ }
+ } else if (smmu_domain->smmu != smmu) {
+ dev_err(dev,
+ "cannot attach to SMMU %s (upstream of %s)\n",
+ dev_name(smmu_domain->smmu->dev),
+ dev_name(smmu->dev));
+ ret = -ENXIO;
+ goto out_unlock;
+ }
+
+ ste->assigned = true;
+
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_BYPASS) {
+ ste->s1_cfg = NULL;
+ ste->s2_cfg = NULL;
+ } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ ste->s1_cfg = &smmu_domain->s1_cfg;
+ ste->s2_cfg = NULL;
+ arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
+ } else {
+ ste->s1_cfg = NULL;
+ ste->s2_cfg = &smmu_domain->s2_cfg;
+ }
+
+ arm_smmu_install_ste_for_dev(dev->iommu_fwspec);
+out_unlock:
+ mutex_unlock(&smmu_domain->init_mutex);
+ return ret;
+}
+
+static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
+ phys_addr_t paddr, size_t size, int prot)
+{
+ struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
+
+ if (!ops)
+ return -ENODEV;
+
+ return ops->map(ops, iova, paddr, size, prot);
+}
+
+static size_t
+arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
+{
+ struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
+
+ if (!ops)
+ return 0;
+
+ return ops->unmap(ops, iova, size);
+}
+
+static phys_addr_t
+arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
+{
+ struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
+
+ if (domain->type == IOMMU_DOMAIN_IDENTITY)
+ return iova;
+
+ if (!ops)
+ return 0;
+
+ return ops->iova_to_phys(ops, iova);
+}
+
+static struct platform_driver arm_smmu_driver;
+
+static int arm_smmu_match_node(struct device *dev, void *data)
+{
+ return dev->fwnode == data;
+}
+
+static
+struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+ struct device *dev = driver_find_device(&arm_smmu_driver.driver, NULL,
+ fwnode, arm_smmu_match_node);
+ put_device(dev);
+ return dev ? dev_get_drvdata(dev) : NULL;
+}
+
+static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
+{
+ unsigned long limit = smmu->strtab_cfg.num_l1_ents;
+
+ if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
+ limit *= 1UL << STRTAB_SPLIT;
+
+ return sid < limit;
+}
+
+static struct iommu_ops arm_smmu_ops;
+
+static int arm_smmu_add_device(struct device *dev)
+{
+ int i, ret;
+ struct arm_smmu_device *smmu;
+ struct arm_smmu_master_data *master;
+ struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+ struct iommu_group *group;
+
+ if (!fwspec || fwspec->ops != &arm_smmu_ops)
+ return -ENODEV;
+ /*
+ * We _can_ actually withstand dodgy bus code re-calling add_device()
+ * without an intervening remove_device()/of_xlate() sequence, but
+ * we're not going to do so quietly...
+ */
+ if (WARN_ON_ONCE(fwspec->iommu_priv)) {
+ master = fwspec->iommu_priv;
+ smmu = master->smmu;
+ } else {
+ smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
+ if (!smmu)
+ return -ENODEV;
+ master = kzalloc(sizeof(*master), GFP_KERNEL);
+ if (!master)
+ return -ENOMEM;
+
+ master->smmu = smmu;
+ fwspec->iommu_priv = master;
+ }
+
+ /* Check the SIDs are in range of the SMMU and our stream table */
+ for (i = 0; i < fwspec->num_ids; i++) {
+ u32 sid = fwspec->ids[i];
+
+ if (!arm_smmu_sid_in_range(smmu, sid))
+ return -ERANGE;
+
+ /* Ensure l2 strtab is initialised */
+ if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+ ret = arm_smmu_init_l2_strtab(smmu, sid);
+ if (ret)
+ return ret;
+ }
+ }
+
+ group = iommu_group_get_for_dev(dev);
+ if (!IS_ERR(group)) {
+ iommu_group_put(group);
+ iommu_device_link(&smmu->iommu, dev);
+ }
+
+ return PTR_ERR_OR_ZERO(group);
+}
+
+static void arm_smmu_remove_device(struct device *dev)
+{
+ struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+ struct arm_smmu_master_data *master;
+ struct arm_smmu_device *smmu;
+
+ if (!fwspec || fwspec->ops != &arm_smmu_ops)
+ return;
+
+ master = fwspec->iommu_priv;
+ smmu = master->smmu;
+ if (master && master->ste.assigned)
+ arm_smmu_detach_dev(dev);
+ iommu_group_remove_device(dev);
+ iommu_device_unlink(&smmu->iommu, dev);
+ kfree(master);
+ iommu_fwspec_free(dev);
+}
+
+static struct iommu_group *arm_smmu_device_group(struct device *dev)
+{
+ struct iommu_group *group;
+
+ /*
+ * We don't support devices sharing stream IDs other than PCI RID
+ * aliases, since the necessary ID-to-device lookup becomes rather
+ * impractical given a potential sparse 32-bit stream ID space.
+ */
+ if (dev_is_pci(dev))
+ group = pci_device_group(dev);
+ else
+ group = generic_device_group(dev);
+
+ return group;
+}
+
+static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
+ enum iommu_attr attr, void *data)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+ if (domain->type != IOMMU_DOMAIN_UNMANAGED)
+ return -EINVAL;
+
+ switch (attr) {
+ case DOMAIN_ATTR_NESTING:
+ *(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
+ return 0;
+ default:
+ return -ENODEV;
+ }
+}
+
+static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
+ enum iommu_attr attr, void *data)
+{
+ int ret = 0;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+ if (domain->type != IOMMU_DOMAIN_UNMANAGED)
+ return -EINVAL;
+
+ mutex_lock(&smmu_domain->init_mutex);
+
+ switch (attr) {
+ case DOMAIN_ATTR_NESTING:
+ if (smmu_domain->smmu) {
+ ret = -EPERM;
+ goto out_unlock;
+ }
+
+ if (*(int *)data)
+ smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
+ else
+ smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+
+ break;
+ default:
+ ret = -ENODEV;
+ }
+
+out_unlock:
+ mutex_unlock(&smmu_domain->init_mutex);
+ return ret;
+}
+
+static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+ return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static void arm_smmu_get_resv_regions(struct device *dev,
+ struct list_head *head)
+{
+ struct iommu_resv_region *region;
+ int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+ region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+ prot, IOMMU_RESV_SW_MSI);
+ if (!region)
+ return;
+
+	list_add_tail(&region->list, head);
+
+ iommu_dma_get_resv_regions(dev, head);
+}
+
+static void arm_smmu_put_resv_regions(struct device *dev,
+ struct list_head *head)
+{
+ struct iommu_resv_region *entry, *next;
+
+ list_for_each_entry_safe(entry, next, head, list)
+ kfree(entry);
+}
+
+static struct iommu_ops arm_smmu_ops = {
+ .capable = arm_smmu_capable,
+ .domain_alloc = arm_smmu_domain_alloc,
+ .domain_free = arm_smmu_domain_free,
+ .attach_dev = arm_smmu_attach_dev,
+ .map = arm_smmu_map,
+ .unmap = arm_smmu_unmap,
+ .map_sg = default_iommu_map_sg,
+ .iova_to_phys = arm_smmu_iova_to_phys,
+ .add_device = arm_smmu_add_device,
+ .remove_device = arm_smmu_remove_device,
+ .device_group = arm_smmu_device_group,
+ .domain_get_attr = arm_smmu_domain_get_attr,
+ .domain_set_attr = arm_smmu_domain_set_attr,
+ .of_xlate = arm_smmu_of_xlate,
+ .get_resv_regions = arm_smmu_get_resv_regions,
+ .put_resv_regions = arm_smmu_put_resv_regions,
+ .pgsize_bitmap = -1UL, /* Restricted during device attach */
+};
+
+/* Probing and initialisation functions */
+static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
+ struct arm_smmu_queue *q,
+ unsigned long prod_off,
+ unsigned long cons_off,
+ size_t dwords)
+{
+ size_t qsz = ((1 << q->max_n_shift) * dwords) << 3;
+
+ q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma, GFP_KERNEL);
+ if (!q->base) {
+ dev_err(smmu->dev, "failed to allocate queue (0x%zx bytes)\n",
+ qsz);
+ return -ENOMEM;
+ }
+
+ q->prod_reg = arm_smmu_page1_fixup(prod_off, smmu);
+ q->cons_reg = arm_smmu_page1_fixup(cons_off, smmu);
+ q->ent_dwords = dwords;
+
+ q->q_base = Q_BASE_RWA;
+ q->q_base |= q->base_dma & Q_BASE_ADDR_MASK << Q_BASE_ADDR_SHIFT;
+ q->q_base |= (q->max_n_shift & Q_BASE_LOG2SIZE_MASK)
+ << Q_BASE_LOG2SIZE_SHIFT;
+
+ q->prod = q->cons = 0;
+ return 0;
+}
+
+static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
+{
+ int ret;
+
+ /* cmdq */
+ spin_lock_init(&smmu->cmdq.lock);
+ ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
+ ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS);
+ if (ret)
+ return ret;
+
+ /* evtq */
+ ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
+ ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS);
+ if (ret)
+ return ret;
+
+ /* priq */
+ if (!(smmu->features & ARM_SMMU_FEAT_PRI))
+ return 0;
+
+ return arm_smmu_init_one_queue(smmu, &smmu->priq.q, ARM_SMMU_PRIQ_PROD,
+ ARM_SMMU_PRIQ_CONS, PRIQ_ENT_DWORDS);
+}
+
+static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu)
+{
+ unsigned int i;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+ size_t size = sizeof(*cfg->l1_desc) * cfg->num_l1_ents;
+ void *strtab = smmu->strtab_cfg.strtab;
+
+ cfg->l1_desc = devm_kzalloc(smmu->dev, size, GFP_KERNEL);
+ if (!cfg->l1_desc) {
+ dev_err(smmu->dev, "failed to allocate l1 stream table desc\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < cfg->num_l1_ents; ++i) {
+ arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]);
+ strtab += STRTAB_L1_DESC_DWORDS << 3;
+ }
+
+ return 0;
+}
+
+static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
+{
+ void *strtab;
+ u64 reg;
+ u32 size, l1size;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+ /* Calculate the L1 size, capped to the SIDSIZE. */
+ size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
+ size = min(size, smmu->sid_bits - STRTAB_SPLIT);
+ cfg->num_l1_ents = 1 << size;
+
+ size += STRTAB_SPLIT;
+ if (size < smmu->sid_bits)
+ dev_warn(smmu->dev,
+ "2-level strtab only covers %u/%u bits of SID\n",
+ size, smmu->sid_bits);
+
+ l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
+ strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!strtab) {
+ dev_err(smmu->dev,
+ "failed to allocate l1 stream table (%u bytes)\n",
+			l1size);
+ return -ENOMEM;
+ }
+ cfg->strtab = strtab;
+
+ /* Configure strtab_base_cfg for 2 levels */
+ reg = STRTAB_BASE_CFG_FMT_2LVL;
+ reg |= (size & STRTAB_BASE_CFG_LOG2SIZE_MASK)
+ << STRTAB_BASE_CFG_LOG2SIZE_SHIFT;
+ reg |= (STRTAB_SPLIT & STRTAB_BASE_CFG_SPLIT_MASK)
+ << STRTAB_BASE_CFG_SPLIT_SHIFT;
+ cfg->strtab_base_cfg = reg;
+
+ return arm_smmu_init_l1_strtab(smmu);
+}
+
+static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
+{
+ void *strtab;
+ u64 reg;
+ u32 size;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+ size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
+ strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!strtab) {
+ dev_err(smmu->dev,
+ "failed to allocate linear stream table (%u bytes)\n",
+ size);
+ return -ENOMEM;
+ }
+ cfg->strtab = strtab;
+ cfg->num_l1_ents = 1 << smmu->sid_bits;
+
+ /* Configure strtab_base_cfg for a linear table covering all SIDs */
+ reg = STRTAB_BASE_CFG_FMT_LINEAR;
+ reg |= (smmu->sid_bits & STRTAB_BASE_CFG_LOG2SIZE_MASK)
+ << STRTAB_BASE_CFG_LOG2SIZE_SHIFT;
+ cfg->strtab_base_cfg = reg;
+
+ arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents);
+ return 0;
+}
+
+static int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
+{
+ u64 reg;
+ int ret;
+
+ if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
+ ret = arm_smmu_init_strtab_2lvl(smmu);
+ else
+ ret = arm_smmu_init_strtab_linear(smmu);
+
+ if (ret)
+ return ret;
+
+ /* Set the strtab base address */
+ reg = smmu->strtab_cfg.strtab_dma &
+ STRTAB_BASE_ADDR_MASK << STRTAB_BASE_ADDR_SHIFT;
+ reg |= STRTAB_BASE_RA;
+ smmu->strtab_cfg.strtab_base = reg;
+
+ /* Allocate the first VMID for stage-2 bypass STEs */
+ set_bit(0, smmu->vmid_map);
+ return 0;
+}
+
+static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
+{
+ int ret;
+
+ ret = arm_smmu_init_queues(smmu);
+ if (ret)
+ return ret;
+
+ return arm_smmu_init_strtab(smmu);
+}
+
+static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+ unsigned int reg_off, unsigned int ack_off)
+{
+ u32 reg;
+
+ writel_relaxed(val, smmu->base + reg_off);
+ return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
+ 1, ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+/* GBPA is "special" */
+static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
+{
+ int ret;
+ u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
+
+ ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+ 1, ARM_SMMU_POLL_TIMEOUT_US);
+ if (ret)
+ return ret;
+
+ reg &= ~clr;
+ reg |= set;
+ writel_relaxed(reg | GBPA_UPDATE, gbpa);
+ return readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+ 1, ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+static void arm_smmu_free_msis(void *data)
+{
+ struct device *dev = data;
+ platform_msi_domain_free_irqs(dev);
+}
+
+static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
+{
+ phys_addr_t doorbell;
+ struct device *dev = msi_desc_to_dev(desc);
+ struct arm_smmu_device *smmu = dev_get_drvdata(dev);
+ phys_addr_t *cfg = arm_smmu_msi_cfg[desc->platform.msi_index];
+
+ doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
+ doorbell &= MSI_CFG0_ADDR_MASK << MSI_CFG0_ADDR_SHIFT;
+
+ writeq_relaxed(doorbell, smmu->base + cfg[0]);
+ writel_relaxed(msg->data, smmu->base + cfg[1]);
+ writel_relaxed(MSI_CFG2_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
+}
+
+static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
+{
+ struct msi_desc *desc;
+ int ret, nvec = ARM_SMMU_MAX_MSIS;
+ struct device *dev = smmu->dev;
+
+ /* Clear the MSI address regs */
+ writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
+ writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
+
+ if (smmu->features & ARM_SMMU_FEAT_PRI)
+ writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
+ else
+ nvec--;
+
+ if (!(smmu->features & ARM_SMMU_FEAT_MSI))
+ return;
+
+ /* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
+ ret = platform_msi_domain_alloc_irqs(dev, nvec, arm_smmu_write_msi_msg);
+ if (ret) {
+ dev_warn(dev, "failed to allocate MSIs\n");
+ return;
+ }
+
+ for_each_msi_entry(desc, dev) {
+ switch (desc->platform.msi_index) {
+ case EVTQ_MSI_INDEX:
+ smmu->evtq.q.irq = desc->irq;
+ break;
+ case GERROR_MSI_INDEX:
+ smmu->gerr_irq = desc->irq;
+ break;
+ case PRIQ_MSI_INDEX:
+ smmu->priq.q.irq = desc->irq;
+ break;
+ default: /* Unknown */
+ continue;
+ }
+ }
+
+ /* Add callback to free MSIs on teardown */
+ devm_add_action(dev, arm_smmu_free_msis, dev);
+}
+
+static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
+{
+ int irq, ret;
+
+ arm_smmu_setup_msis(smmu);
+
+ /* Request interrupt lines */
+ irq = smmu->evtq.q.irq;
+ if (irq) {
+ ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
+ arm_smmu_evtq_thread,
+ IRQF_ONESHOT,
+ "arm-smmu-v3-evtq", smmu);
+ if (ret < 0)
+ dev_warn(smmu->dev, "failed to enable evtq irq\n");
+ }
+
+ irq = smmu->cmdq.q.irq;
+ if (irq) {
+ ret = devm_request_irq(smmu->dev, irq,
+ arm_smmu_cmdq_sync_handler, 0,
+ "arm-smmu-v3-cmdq-sync", smmu);
+ if (ret < 0)
+ dev_warn(smmu->dev, "failed to enable cmdq-sync irq\n");
+ }
+
+ irq = smmu->gerr_irq;
+ if (irq) {
+ ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
+ 0, "arm-smmu-v3-gerror", smmu);
+ if (ret < 0)
+ dev_warn(smmu->dev, "failed to enable gerror irq\n");
+ }
+
+ if (smmu->features & ARM_SMMU_FEAT_PRI) {
+ irq = smmu->priq.q.irq;
+ if (irq) {
+ ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
+ arm_smmu_priq_thread,
+ IRQF_ONESHOT,
+ "arm-smmu-v3-priq",
+ smmu);
+ if (ret < 0)
+ dev_warn(smmu->dev,
+ "failed to enable priq irq\n");
+ }
+ }
+}
+
+static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
+{
+ int ret, irq;
+ u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN;
+
+ /* Disable IRQs first */
+ ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_IRQ_CTRL,
+ ARM_SMMU_IRQ_CTRLACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to disable irqs\n");
+ return ret;
+ }
+
+ irq = smmu->combined_irq;
+ if (irq) {
+ /*
+		 * Cavium ThunderX2 implementation doesn't support unique
+ * irq lines. Use single irq line for all the SMMUv3 interrupts.
+ */
+ ret = devm_request_threaded_irq(smmu->dev, irq,
+ arm_smmu_combined_irq_handler,
+ arm_smmu_combined_irq_thread,
+ IRQF_ONESHOT,
+ "arm-smmu-v3-combined-irq", smmu);
+ if (ret < 0)
+ dev_warn(smmu->dev, "failed to enable combined irq\n");
+ } else
+ arm_smmu_setup_unique_irqs(smmu);
+
+ if (smmu->features & ARM_SMMU_FEAT_PRI)
+ irqen_flags |= IRQ_CTRL_PRIQ_IRQEN;
+
+ /* Enable interrupt generation on the SMMU */
+ ret = arm_smmu_write_reg_sync(smmu, irqen_flags,
+ ARM_SMMU_IRQ_CTRL, ARM_SMMU_IRQ_CTRLACK);
+ if (ret)
+ dev_warn(smmu->dev, "failed to enable irqs\n");
+
+ return 0;
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
+{
+ int ret;
+
+ ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
+ if (ret)
+ dev_err(smmu->dev, "failed to clear cr0\n");
+
+ return ret;
+}
+
+static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+{
+ int ret;
+ u32 reg, enables;
+ struct arm_smmu_cmdq_ent cmd;
+
+ /* Clear CR0 and sync (disables SMMU and queue processing) */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+ if (reg & CR0_SMMUEN)
+ dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
+
+ ret = arm_smmu_device_disable(smmu);
+ if (ret)
+ return ret;
+
+ /* CR1 (table and queue memory attributes) */
+ reg = (CR1_SH_ISH << CR1_TABLE_SH_SHIFT) |
+ (CR1_CACHE_WB << CR1_TABLE_OC_SHIFT) |
+ (CR1_CACHE_WB << CR1_TABLE_IC_SHIFT) |
+ (CR1_SH_ISH << CR1_QUEUE_SH_SHIFT) |
+ (CR1_CACHE_WB << CR1_QUEUE_OC_SHIFT) |
+ (CR1_CACHE_WB << CR1_QUEUE_IC_SHIFT);
+ writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
+
+ /* CR2 (random crap) */
+ reg = CR2_PTM | CR2_RECINVSID | CR2_E2H;
+ writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
+
+ /* Stream table */
+ writeq_relaxed(smmu->strtab_cfg.strtab_base,
+ smmu->base + ARM_SMMU_STRTAB_BASE);
+ writel_relaxed(smmu->strtab_cfg.strtab_base_cfg,
+ smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
+
+ /* Command queue */
+ writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
+ writel_relaxed(smmu->cmdq.q.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
+ writel_relaxed(smmu->cmdq.q.cons, smmu->base + ARM_SMMU_CMDQ_CONS);
+
+ enables = CR0_CMDQEN;
+ ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ ARM_SMMU_CR0ACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to enable command queue\n");
+ return ret;
+ }
+
+ /* Invalidate any cached configuration */
+ cmd.opcode = CMDQ_OP_CFGI_ALL;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ cmd.opcode = CMDQ_OP_CMD_SYNC;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+
+ /* Invalidate any stale TLB entries */
+ if (smmu->features & ARM_SMMU_FEAT_HYP) {
+ cmd.opcode = CMDQ_OP_TLBI_EL2_ALL;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ }
+
+ cmd.opcode = CMDQ_OP_TLBI_NSNH_ALL;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ cmd.opcode = CMDQ_OP_CMD_SYNC;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+
+ /* Event queue */
+ writeq_relaxed(smmu->evtq.q.q_base, smmu->base + ARM_SMMU_EVTQ_BASE);
+ writel_relaxed(smmu->evtq.q.prod,
+ arm_smmu_page1_fixup(ARM_SMMU_EVTQ_PROD, smmu));
+ writel_relaxed(smmu->evtq.q.cons,
+ arm_smmu_page1_fixup(ARM_SMMU_EVTQ_CONS, smmu));
+
+ enables |= CR0_EVTQEN;
+ ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ ARM_SMMU_CR0ACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to enable event queue\n");
+ return ret;
+ }
+
+ /* PRI queue */
+ if (smmu->features & ARM_SMMU_FEAT_PRI) {
+ writeq_relaxed(smmu->priq.q.q_base,
+ smmu->base + ARM_SMMU_PRIQ_BASE);
+ writel_relaxed(smmu->priq.q.prod,
+ arm_smmu_page1_fixup(ARM_SMMU_PRIQ_PROD, smmu));
+ writel_relaxed(smmu->priq.q.cons,
+ arm_smmu_page1_fixup(ARM_SMMU_PRIQ_CONS, smmu));
+
+ enables |= CR0_PRIQEN;
+ ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ ARM_SMMU_CR0ACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to enable PRI queue\n");
+ return ret;
+ }
+ }
+
+ ret = arm_smmu_setup_irqs(smmu);
+ if (ret) {
+ dev_err(smmu->dev, "failed to setup irqs\n");
+ return ret;
+ }
+
+
+ /* Enable the SMMU interface, or ensure bypass */
+ if (!bypass || disable_bypass) {
+ enables |= CR0_SMMUEN;
+ } else {
+ ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
+ if (ret) {
+ dev_err(smmu->dev, "GBPA not responding to update\n");
+ return ret;
+ }
+ }
+ ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ ARM_SMMU_CR0ACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to enable SMMU interface\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
+{
+ u32 reg;
+ bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+
+ /* IDR0 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
+
+ /* 2-level structures */
+ if ((reg & IDR0_ST_LVL_MASK << IDR0_ST_LVL_SHIFT) == IDR0_ST_LVL_2LVL)
+ smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+ if (reg & IDR0_CD2L)
+ smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB;
+
+ /*
+ * Translation table endianness.
+ * We currently require the same endianness as the CPU, but this
+ * could be changed later by adding a new IO_PGTABLE_QUIRK.
+ */
+ switch (reg & IDR0_TTENDIAN_MASK << IDR0_TTENDIAN_SHIFT) {
+ case IDR0_TTENDIAN_MIXED:
+ smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE;
+ break;
+#ifdef __BIG_ENDIAN
+ case IDR0_TTENDIAN_BE:
+ smmu->features |= ARM_SMMU_FEAT_TT_BE;
+ break;
+#else
+ case IDR0_TTENDIAN_LE:
+ smmu->features |= ARM_SMMU_FEAT_TT_LE;
+ break;
+#endif
+ default:
+ dev_err(smmu->dev, "unknown/unsupported TT endianness!\n");
+ return -ENXIO;
+ }
+
+ /* Boolean feature flags */
+ if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
+ smmu->features |= ARM_SMMU_FEAT_PRI;
+
+ if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
+ smmu->features |= ARM_SMMU_FEAT_ATS;
+
+ if (reg & IDR0_SEV)
+ smmu->features |= ARM_SMMU_FEAT_SEV;
+
+ if (reg & IDR0_MSI)
+ smmu->features |= ARM_SMMU_FEAT_MSI;
+
+ if (reg & IDR0_HYP)
+ smmu->features |= ARM_SMMU_FEAT_HYP;
+
+ /*
+ * The coherency feature as set by FW is used in preference to the ID
+ * register, but warn on mismatch.
+ */
+ if (!!(reg & IDR0_COHACC) != coherent)
+ dev_warn(smmu->dev, "IDR0.COHACC overridden by dma-coherent property (%s)\n",
+ coherent ? "true" : "false");
+
+ switch (reg & IDR0_STALL_MODEL_MASK << IDR0_STALL_MODEL_SHIFT) {
+ case IDR0_STALL_MODEL_STALL:
+ /* Fallthrough */
+ case IDR0_STALL_MODEL_FORCE:
+ smmu->features |= ARM_SMMU_FEAT_STALLS;
+ }
+
+ if (reg & IDR0_S1P)
+ smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
+
+ if (reg & IDR0_S2P)
+ smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
+
+ if (!(reg & (IDR0_S1P | IDR0_S2P))) {
+ dev_err(smmu->dev, "no translation support!\n");
+ return -ENXIO;
+ }
+
+ /* We only support the AArch64 table format at present */
+ switch (reg & IDR0_TTF_MASK << IDR0_TTF_SHIFT) {
+ case IDR0_TTF_AARCH32_64:
+ smmu->ias = 40;
+ /* Fallthrough */
+ case IDR0_TTF_AARCH64:
+ break;
+ default:
+ dev_err(smmu->dev, "AArch64 table format not supported!\n");
+ return -ENXIO;
+ }
+
+ /* ASID/VMID sizes */
+ smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
+ smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
+
+ /* IDR1 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
+ if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
+ dev_err(smmu->dev, "embedded implementation not supported\n");
+ return -ENXIO;
+ }
+
+ /* Queue sizes, capped at 4k */
+ smmu->cmdq.q.max_n_shift = min((u32)CMDQ_MAX_SZ_SHIFT,
+ reg >> IDR1_CMDQ_SHIFT & IDR1_CMDQ_MASK);
+ if (!smmu->cmdq.q.max_n_shift) {
+ /* Odd alignment restrictions on the base, so ignore for now */
+ dev_err(smmu->dev, "unit-length command queue not supported\n");
+ return -ENXIO;
+ }
+
+ smmu->evtq.q.max_n_shift = min((u32)EVTQ_MAX_SZ_SHIFT,
+ reg >> IDR1_EVTQ_SHIFT & IDR1_EVTQ_MASK);
+ smmu->priq.q.max_n_shift = min((u32)PRIQ_MAX_SZ_SHIFT,
+ reg >> IDR1_PRIQ_SHIFT & IDR1_PRIQ_MASK);
+
+ /* SID/SSID sizes */
+ smmu->ssid_bits = reg >> IDR1_SSID_SHIFT & IDR1_SSID_MASK;
+ smmu->sid_bits = reg >> IDR1_SID_SHIFT & IDR1_SID_MASK;
+
+ /*
+ * If the SMMU supports fewer bits than would fill a single L2 stream
+ * table, use a linear table instead.
+ */
+ if (smmu->sid_bits <= STRTAB_SPLIT)
+ smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+ /* IDR5 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
+
+ /* Maximum number of outstanding stalls */
+ smmu->evtq.max_stalls = reg >> IDR5_STALL_MAX_SHIFT
+ & IDR5_STALL_MAX_MASK;
+
+ /* Page sizes */
+ if (reg & IDR5_GRAN64K)
+ smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
+ if (reg & IDR5_GRAN16K)
+ smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
+ if (reg & IDR5_GRAN4K)
+ smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+
+ if (arm_smmu_ops.pgsize_bitmap == -1UL)
+ arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
+ else
+ arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
+
+ /* Output address size */
+ switch (reg & IDR5_OAS_MASK << IDR5_OAS_SHIFT) {
+ case IDR5_OAS_32_BIT:
+ smmu->oas = 32;
+ break;
+ case IDR5_OAS_36_BIT:
+ smmu->oas = 36;
+ break;
+ case IDR5_OAS_40_BIT:
+ smmu->oas = 40;
+ break;
+ case IDR5_OAS_42_BIT:
+ smmu->oas = 42;
+ break;
+ case IDR5_OAS_44_BIT:
+ smmu->oas = 44;
+ break;
+ default:
+ dev_info(smmu->dev,
+ "unknown output address size. Truncating to 48-bit\n");
+ /* Fallthrough */
+ case IDR5_OAS_48_BIT:
+ smmu->oas = 48;
+ }
+
+ /* Set the DMA mask for our table walker */
+ if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
+ dev_warn(smmu->dev,
+ "failed to set DMA mask for table walker\n");
+
+ smmu->ias = max(smmu->ias, smmu->oas);
+
+ dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
+ smmu->ias, smmu->oas, smmu->features);
+ return 0;
+}
+
+#ifdef CONFIG_ACPI
+static void acpi_smmu_get_options(u32 model, struct arm_smmu_device *smmu)
+{
+ switch (model) {
+ case ACPI_IORT_SMMU_V3_CAVIUM_CN99XX:
+ smmu->options |= ARM_SMMU_OPT_PAGE0_REGS_ONLY;
+ break;
+ case ACPI_IORT_SMMU_HISILICON_HI161X:
+ smmu->options |= ARM_SMMU_OPT_SKIP_PREFETCH;
+ break;
+ }
+
+ dev_notice(smmu->dev, "option mask 0x%x\n", smmu->options);
+}
+
+static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
+ struct arm_smmu_device *smmu)
+{
+ struct acpi_iort_smmu_v3 *iort_smmu;
+ struct device *dev = smmu->dev;
+ struct acpi_iort_node *node;
+
+ node = *(struct acpi_iort_node **)dev_get_platdata(dev);
+
+ /* Retrieve SMMUv3 specific data */
+ iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
+
+ acpi_smmu_get_options(iort_smmu->model, smmu);
+
+ if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE)
+ smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+
+ return 0;
+}
+#else
+static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
+ struct arm_smmu_device *smmu)
+{
+ return -ENODEV;
+}
+#endif
+
+static int arm_smmu_device_dt_probe(struct platform_device *pdev,
+ struct arm_smmu_device *smmu)
+{
+ struct device *dev = &pdev->dev;
+ u32 cells;
+ int ret = -EINVAL;
+
+ if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
+ dev_err(dev, "missing #iommu-cells property\n");
+ else if (cells != 1)
+ dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
+ else
+ ret = 0;
+
+ parse_driver_options(smmu);
+
+ if (of_dma_is_coherent(dev->of_node))
+ smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+
+ return ret;
+}
+
+static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
+{
+ if (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY)
+ return SZ_64K;
+ else
+ return SZ_128K;
+}
+
+static int arm_smmu_device_probe(struct platform_device *pdev)
+{
+ int irq, ret;
+ struct resource *res;
+ resource_size_t ioaddr;
+ struct arm_smmu_device *smmu;
+ struct device *dev = &pdev->dev;
+ bool bypass;
+
+ smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
+ if (!smmu) {
+ dev_err(dev, "failed to allocate arm_smmu_device\n");
+ return -ENOMEM;
+ }
+ smmu->dev = dev;
+
+ if (dev->of_node) {
+ ret = arm_smmu_device_dt_probe(pdev, smmu);
+ } else {
+ ret = arm_smmu_device_acpi_probe(pdev, smmu);
+ if (ret == -ENODEV)
+ return ret;
+ }
+
+ /* Set bypass mode according to firmware probing result */
+ bypass = !!ret;
+
+ /* Base address */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (resource_size(res) + 1 < arm_smmu_resource_size(smmu)) {
+ dev_err(dev, "MMIO region too small (%pr)\n", res);
+ return -EINVAL;
+ }
+ ioaddr = res->start;
+
+ smmu->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(smmu->base))
+ return PTR_ERR(smmu->base);
+
+ /* Interrupt lines */
+
+ irq = platform_get_irq_byname(pdev, "combined");
+ if (irq > 0)
+ smmu->combined_irq = irq;
+ else {
+ irq = platform_get_irq_byname(pdev, "eventq");
+ if (irq > 0)
+ smmu->evtq.q.irq = irq;
+
+ irq = platform_get_irq_byname(pdev, "priq");
+ if (irq > 0)
+ smmu->priq.q.irq = irq;
+
+ irq = platform_get_irq_byname(pdev, "cmdq-sync");
+ if (irq > 0)
+ smmu->cmdq.q.irq = irq;
+
+ irq = platform_get_irq_byname(pdev, "gerror");
+ if (irq > 0)
+ smmu->gerr_irq = irq;
+ }
+ /* Probe the h/w */
+ ret = arm_smmu_device_hw_probe(smmu);
+ if (ret)
+ return ret;
+
+ /* Initialise in-memory data structures */
+ ret = arm_smmu_init_structures(smmu);
+ if (ret)
+ return ret;
+
+ /* Record our private device structure */
+ platform_set_drvdata(pdev, smmu);
+
+ /* Reset the device */
+ ret = arm_smmu_device_reset(smmu, bypass);
+ if (ret)
+ return ret;
+
+ /* And we're up. Go go go! */
+ ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
+ "smmu3.%pa", &ioaddr);
+ if (ret)
+ return ret;
+
+ iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
+ iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
+
+ ret = iommu_device_register(&smmu->iommu);
+ if (ret) {
+ dev_err(dev, "Failed to register iommu\n");
+ return ret;
+ }
+
+#ifdef CONFIG_PCI
+ if (pci_bus_type.iommu_ops != &arm_smmu_ops) {
+ pci_request_acs();
+ ret = bus_set_iommu(&pci_bus_type, &arm_smmu_ops);
+ if (ret)
+ return ret;
+ }
+#endif
+#ifdef CONFIG_ARM_AMBA
+ if (amba_bustype.iommu_ops != &arm_smmu_ops) {
+ ret = bus_set_iommu(&amba_bustype, &arm_smmu_ops);
+ if (ret)
+ return ret;
+ }
+#endif
+ if (platform_bus_type.iommu_ops != &arm_smmu_ops) {
+ ret = bus_set_iommu(&platform_bus_type, &arm_smmu_ops);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+static int arm_smmu_device_remove(struct platform_device *pdev)
+{
+ struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
+
+ arm_smmu_device_disable(smmu);
+
+ return 0;
+}
+
+static void arm_smmu_device_shutdown(struct platform_device *pdev)
+{
+ arm_smmu_device_remove(pdev);
+}
+
+static const struct of_device_id arm_smmu_of_match[] = {
+ { .compatible = "arm,smmu-v3", },
+ { },
+};
+MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
+
+static struct platform_driver arm_smmu_driver = {
+ .driver = {
+ .name = "arm-smmu-v3",
+ .of_match_table = of_match_ptr(arm_smmu_of_match),
+ },
+ .probe = arm_smmu_device_probe,
+ .remove = arm_smmu_device_remove,
+ .shutdown = arm_smmu_device_shutdown,
+};
+module_platform_driver(arm_smmu_driver);
+
+IOMMU_OF_DECLARE(arm_smmuv3, "arm,smmu-v3", NULL);
+
+MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
+MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
+MODULE_LICENSE("GPL v2");
* [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
2017-12-05 3:59 [RFC v3 0/4] SMMUv3 Driver Sameer Goel
` (2 preceding siblings ...)
2017-12-05 3:59 ` [RFC v3 3/4] Add verbatim copy of arm-smmu-v3.c from Linux Sameer Goel
@ 2017-12-05 3:59 ` Sameer Goel
2017-12-05 14:17 ` Julien Grall
2017-12-12 8:09 ` Manish Jaggi
3 siblings, 2 replies; 19+ messages in thread
From: Sameer Goel @ 2017-12-05 3:59 UTC (permalink / raw)
To: xen-devel, julien.grall, mjaggi; +Cc: Sameer Goel, sstabellini, shankerd
This driver follows an approach similar to the existing SMMU (v1/v2)
driver. The intent here is to reuse as much Linux code as possible.
- Glue code has been introduced in headers to bridge the API calls.
- The Xen IOMMU entry points call into the ported Linux functions.
- Xen modifications are preceded by a /*Xen: ... */ comment.
- New config items for SMMUv3 and legacy SMMU have been defined.
Signed-off-by: Sameer Goel <sgoel@codeaurora.org>
---
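A note for reviewers on the glue approach: the sketch below shows the
general shape of the shims involved. It is illustrative only -- the
xen_dma_alloc() helper is a made-up name used purely for explanation;
the real definitions live in xen/include/xen/linux_compat.h (patch 2/4)
and arm_smmu.h (this patch), and the driver open-codes the allocation
pattern rather than wrapping it in a helper.

    #include <xen/types.h>     /* paddr_t */
    #include <xen/xmalloc.h>   /* _xzalloc() */
    #include <asm/mm.h>        /* virt_to_maddr() */

    /* Xen has no platform bus; arm_smmu.h aliases the Linux type away. */
    #define platform_device device

    /*
     * dmam_alloc_coherent() has no Xen equivalent. Since this port
     * requires an IO-coherent SMMU, a zeroed, suitably aligned xenheap
     * allocation plus virt_to_maddr() for the DMA address is enough.
     */
    static inline void *xen_dma_alloc(size_t size, size_t align,
                                      paddr_t *dma /* OUT */)
    {
        void *ptr = _xzalloc(size, align);  /* zeroed, 'align'-aligned */

        if ( ptr )
            *dma = virt_to_maddr(ptr);      /* xenheap VA -> machine address */

        return ptr;
    }

The smmu-v3.c hunks below apply exactly this pattern at each former
dmam_alloc_coherent() call site (the queues and the stream tables).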
xen/drivers/Kconfig | 2 +
xen/drivers/passthrough/arm/Kconfig | 14 +
xen/drivers/passthrough/arm/Makefile | 3 +-
xen/drivers/passthrough/arm/arm_smmu.h | 189 ++++++++++
xen/drivers/passthrough/arm/smmu-v3.c | 619 ++++++++++++++++++++++++++++++---
5 files changed, 768 insertions(+), 59 deletions(-)
create mode 100644 xen/drivers/passthrough/arm/Kconfig
create mode 100644 xen/drivers/passthrough/arm/arm_smmu.h
diff --git a/xen/drivers/Kconfig b/xen/drivers/Kconfig
index bc3a54f..6126553 100644
--- a/xen/drivers/Kconfig
+++ b/xen/drivers/Kconfig
@@ -12,4 +12,6 @@ source "drivers/pci/Kconfig"
source "drivers/video/Kconfig"
+source "drivers/passthrough/arm/Kconfig"
+
endmenu
diff --git a/xen/drivers/passthrough/arm/Kconfig b/xen/drivers/passthrough/arm/Kconfig
new file mode 100644
index 0000000..9ac4cea
--- /dev/null
+++ b/xen/drivers/passthrough/arm/Kconfig
@@ -0,0 +1,14 @@
+
+config ARM_SMMU
+ bool "ARM SMMU v1/2 support"
+ depends on ARM_64
+ help
+	  Support for implementations of the ARM System MMU architecture
+	  versions 1 and 2.
+
+config ARM_SMMU_v3
+ bool "ARM SMMUv3 Support"
+ depends on ARM_64
+ help
+ Support for implementations of the ARM System MMU architecture
+ version 3.
+
diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
index f4cd26e..5b3eb15 100644
--- a/xen/drivers/passthrough/arm/Makefile
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -1,2 +1,3 @@
obj-y += iommu.o
-obj-y += smmu.o
+obj-$(CONFIG_ARM_SMMU) += smmu.o
+obj-$(CONFIG_ARM_SMMU_v3) += smmu-v3.o
diff --git a/xen/drivers/passthrough/arm/arm_smmu.h b/xen/drivers/passthrough/arm/arm_smmu.h
new file mode 100644
index 0000000..b5e161f
--- /dev/null
+++ b/xen/drivers/passthrough/arm/arm_smmu.h
@@ -0,0 +1,189 @@
+/******************************************************************************
+ * xen/drivers/passthrough/arm/arm_smmu.h
+ *
+ * Common compatibility defines and data_structures for porting arm smmu
+ * drivers from Linux.
+ *
+ * Copyright (c) 2017 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM_SMMU_H__
+#define __ARM_SMMU_H__
+
+/* Xen: Helpers to get device MMIO and IRQs */
+struct resource {
+ u64 addr;
+ u64 size;
+ unsigned int type;
+};
+
+#define resource_size(res) ((res)->size)
+
+#define platform_device device
+
+#define IORESOURCE_MEM 0
+#define IORESOURCE_IRQ 1
+
+static struct resource *platform_get_resource(struct platform_device *pdev,
+ unsigned int type,
+ unsigned int num)
+{
+ /*
+	 * The resource is only valid between two consecutive calls of
+	 * platform_get_resource(). It's quite ugly, but it avoids adding
+	 * too much code to the part imported from Linux.
+ */
+ static struct resource res;
+ struct acpi_iort_node *iort_node;
+ struct acpi_iort_smmu_v3 *node_smmu_data;
+ int ret = 0;
+
+ res.type = type;
+
+ switch (type) {
+ case IORESOURCE_MEM:
+ if (pdev->type == DEV_ACPI) {
+ ret = 1;
+ iort_node = pdev->acpi_node;
+ node_smmu_data =
+ (struct acpi_iort_smmu_v3 *)iort_node->node_data;
+
+ if (node_smmu_data != NULL) {
+ res.addr = node_smmu_data->base_address;
+ res.size = SZ_128K;
+ ret = 0;
+ }
+ } else {
+ ret = dt_device_get_address(dev_to_dt(pdev), num,
+ &res.addr, &res.size);
+ }
+
+ return ((ret) ? NULL : &res);
+
+ case IORESOURCE_IRQ:
+ /* ACPI case not implemented as there is no use case for it */
+ ret = platform_get_irq(dev_to_dt(pdev), num);
+
+ if (ret < 0)
+ return NULL;
+
+ res.addr = ret;
+ res.size = 1;
+
+ return &res;
+
+ default:
+ return NULL;
+ }
+}
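+
+/*
+ * Xen: Typical use from the ported driver (see arm_smmu_device_probe):
+ *   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ *   smmu->base = devm_ioremap_resource(dev, res);
+ * The returned pointer must be consumed before the next call, since a
+ * single static struct resource is reused.
+ */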
+
+static int platform_get_irq_byname(struct platform_device *pdev, const char *name)
+{
+ const struct dt_property *dtprop;
+ struct acpi_iort_node *iort_node;
+ struct acpi_iort_smmu_v3 *node_smmu_data;
+ int ret = 0;
+
+ if (pdev->type == DEV_ACPI) {
+ iort_node = pdev->acpi_node;
+ node_smmu_data = (struct acpi_iort_smmu_v3 *)iort_node->node_data;
+
+ if (node_smmu_data != NULL) {
+ if (!strcmp(name, "eventq"))
+ ret = node_smmu_data->event_gsiv;
+ else if (!strcmp(name, "priq"))
+ ret = node_smmu_data->pri_gsiv;
+ else if (!strcmp(name, "cmdq-sync"))
+ ret = node_smmu_data->sync_gsiv;
+ else if (!strcmp(name, "gerror"))
+ ret = node_smmu_data->gerr_gsiv;
+ else
+ ret = -EINVAL;
+ }
+ } else {
+ dtprop = dt_find_property(dev_to_dt(pdev), "interrupt-names", NULL);
+ if (!dtprop)
+ return -EINVAL;
+
+ if (!dtprop->value)
+ return -ENODATA;
+ }
+
+ return ret;
+}
+
+/* Xen: Stub out DMA domain related functions */
+#define iommu_get_dma_cookie(dom) 0
+#define iommu_put_dma_cookie(dom) 0
+
+static void __iomem *devm_ioremap_resource(struct device *dev,
+ struct resource *res)
+{
+ void __iomem *ptr;
+
+ if (!res || res->type != IORESOURCE_MEM) {
+ dev_err(dev, "Invalid resource\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ ptr = ioremap_nocache(res->addr, res->size);
+ if (!ptr) {
+ dev_err(dev,
+ "ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
+ res->addr, res->size);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ return ptr;
+}
+
+/* Xen: Dummy iommu_domain */
+struct iommu_domain {
+ /* Runtime SMMU configuration for this iommu_domain */
+ struct arm_smmu_domain *priv;
+ unsigned int type;
+
+ atomic_t ref;
+	/* Used to link iommu_domain contexts for the same domain.
+	 * There is at least one per SMMU used by the domain.
+ */
+ struct list_head list;
+};
+/* Xen: Domain type definitions. Not really needed for Xen; defined only
+ * to port the Linux code as-is.
+ */
+#define IOMMU_DOMAIN_UNMANAGED 0
+#define IOMMU_DOMAIN_DMA 1
+#define IOMMU_DOMAIN_IDENTITY 2
+
+/* Xen: Describes information required for a Xen domain */
+struct arm_smmu_xen_domain {
+ spinlock_t lock;
+ /* List of iommu domains associated to this domain */
+ struct list_head iommu_domains;
+};
+
+/*
+ * Xen: Information about each device stored in dev->archdata.iommu
+ *
+ * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
+ * the SMMU).
+ */
+struct arm_smmu_xen_device {
+ struct iommu_domain *domain;
+};
+
+#endif /* __ARM_SMMU_H__ */
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index e67ba6c..c6c1b99 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -18,28 +18,38 @@
* Author: Will Deacon <will.deacon@arm.com>
*
* This driver is powered by bad coffee and bombay mix.
+ *
+ *
+ * Based on Linux drivers/iommu/arm-smmu-v3.c
+ * => commit 7aa8619a66aea52b145e04cbab4f8d6a4e5f3f3b
+ *
+ * Xen modifications:
+ * Sameer Goel <sameer.goel@linaro.org>
+ * Copyright (C) 2017, The Linux Foundation, All rights reserved.
+ *
*/
-#include <linux/acpi.h>
-#include <linux/acpi_iort.h>
-#include <linux/delay.h>
-#include <linux/dma-iommu.h>
-#include <linux/err.h>
-#include <linux/interrupt.h>
-#include <linux/iommu.h>
-#include <linux/iopoll.h>
-#include <linux/module.h>
-#include <linux/msi.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_iommu.h>
-#include <linux/of_platform.h>
-#include <linux/pci.h>
-#include <linux/platform_device.h>
-
-#include <linux/amba/bus.h>
-
-#include "io-pgtable.h"
+#include <xen/acpi.h>
+#include <xen/config.h>
+#include <xen/delay.h>
+#include <xen/errno.h>
+#include <xen/err.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/linux_compat.h>
+#include <xen/list.h>
+#include <xen/mm.h>
+#include <xen/rbtree.h>
+#include <xen/sched.h>
+#include <xen/sizes.h>
+#include <xen/vmap.h>
+#include <acpi/acpi_iort.h>
+#include <asm/atomic.h>
+#include <asm/device.h>
+#include <asm/io.h>
+#include <asm/platform.h>
+
+#include "arm_smmu.h" /* Not a self-contained header, so keep it last in the list */
/* MMIO registers */
#define ARM_SMMU_IDR0 0x0
@@ -423,9 +433,12 @@
#endif
static bool disable_bypass;
+
+#if 0 /* Xen: Not applicable for Xen */
module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
MODULE_PARM_DESC(disable_bypass,
"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
+#endif
enum pri_resp {
PRI_RESP_DENY,
@@ -433,6 +446,7 @@ enum pri_resp {
PRI_RESP_SUCC,
};
+#if 0 /* Xen: No MSI support in this iteration */
enum arm_smmu_msi_index {
EVTQ_MSI_INDEX,
GERROR_MSI_INDEX,
@@ -457,6 +471,7 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
ARM_SMMU_PRIQ_IRQ_CFG2,
},
};
+#endif
struct arm_smmu_cmdq_ent {
/* Common fields */
@@ -561,6 +576,8 @@ struct arm_smmu_s2_cfg {
u16 vmid;
u64 vttbr;
u64 vtcr;
+	/* Xen: Domain associated with this configuration */
+ struct domain *domain;
};
struct arm_smmu_strtab_ent {
@@ -635,9 +652,21 @@ struct arm_smmu_device {
struct arm_smmu_strtab_cfg strtab_cfg;
/* IOMMU core code handle */
+#if 0 /*Xen: Generic iommu_device ref not needed here */
struct iommu_device iommu;
+#endif
+ /* Xen: Need to keep a list of SMMU devices */
+ struct list_head devices;
};
+/* Xen: Keep a list of devices associated with this driver */
+static DEFINE_SPINLOCK(arm_smmu_devices_lock);
+static LIST_HEAD(arm_smmu_devices);
+/* Xen: Helper for finding a device using fwnode */
+static
+struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode);
+
+
/* SMMU private data for each master */
struct arm_smmu_master_data {
struct arm_smmu_device *smmu;
@@ -654,7 +683,7 @@ enum arm_smmu_domain_stage {
struct arm_smmu_domain {
struct arm_smmu_device *smmu;
- struct mutex init_mutex; /* Protects smmu pointer */
+ mutex init_mutex; /* Protects smmu pointer */
struct io_pgtable_ops *pgtbl_ops;
@@ -961,6 +990,7 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
}
+#if 0 /*Xen: Comment out functions that set up S1 translations */
/* Context descriptor manipulation functions */
static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
{
@@ -1003,6 +1033,7 @@ static void arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
cfg->cdptr[3] = cpu_to_le64(cfg->cd.mair << CTXDESC_CD_3_MAIR_SHIFT);
}
+#endif
/* Stream table manipulation functions */
static void
@@ -1164,6 +1195,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
void *strtab;
struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
+ u32 alignment = 0;
if (desc->l2ptr)
return 0;
@@ -1172,14 +1204,16 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
desc->span = STRTAB_SPLIT + 1;
- desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
- GFP_KERNEL | __GFP_ZERO);
+	/* Alignment from the ARM SMMUv3 spec, L1ST.L2Ptr: 2^(5 + span - 1) */
+	alignment = 1 << (5 + (desc->span - 1));
+ desc->l2ptr = _xzalloc(size, alignment);
if (!desc->l2ptr) {
dev_err(smmu->dev,
"failed to allocate l2 stream table for SID %u\n",
sid);
return -ENOMEM;
}
+ desc->l2ptr_dma = virt_to_maddr(desc->l2ptr);
arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
arm_smmu_write_strtab_l1_desc(strtab, desc);
@@ -1232,7 +1266,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
dev_info(smmu->dev, "unexpected PRI request received:\n");
dev_info(smmu->dev,
- "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
+ "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova %#" PRIx64 "\n",
sid, ssid, grpid, last ? "L" : "",
evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
evt[0] & PRIQ_0_PERM_READ ? "R" : "",
@@ -1346,6 +1380,8 @@ static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
{
arm_smmu_gerror_handler(irq, dev);
arm_smmu_cmdq_sync_handler(irq, dev);
+	/*Xen: No threaded IRQs, so call the thread function from here */
+ arm_smmu_combined_irq_thread(irq, dev);
return IRQ_WAKE_THREAD;
}
@@ -1358,11 +1394,49 @@ static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
arm_smmu_cmdq_issue_cmd(smmu, &cmd);
}
+static void arm_smmu_evtq_thread_xen(int irq, void *dev,
+ struct cpu_user_regs *regs)
+{
+ arm_smmu_evtq_thread(irq, dev);
+}
+
+static void arm_smmu_priq_thread_xen(int irq, void *dev,
+ struct cpu_user_regs *regs)
+{
+ arm_smmu_priq_thread(irq, dev);
+}
+
+static void arm_smmu_cmdq_sync_handler_xen(int irq, void *dev,
+ struct cpu_user_regs *regs)
+{
+ arm_smmu_cmdq_sync_handler(irq, dev);
+}
+
+static void arm_smmu_gerror_handler_xen(int irq, void *dev,
+ struct cpu_user_regs *regs)
+{
+ arm_smmu_gerror_handler(irq, dev);
+}
+
+static void arm_smmu_combined_irq_handler_xen(int irq, void *dev,
+ struct cpu_user_regs *regs)
+{
+ arm_smmu_combined_irq_handler(irq, dev);
+}
+
+#define arm_smmu_evtq_thread arm_smmu_evtq_thread_xen
+#define arm_smmu_priq_thread arm_smmu_priq_thread_xen
+#define arm_smmu_cmdq_sync_handler arm_smmu_cmdq_sync_handler_xen
+#define arm_smmu_gerror_handler arm_smmu_gerror_handler_xen
+#define arm_smmu_combined_irq_handler arm_smmu_combined_irq_handler_xen
+
+#if 0 /*Xen: Unused function */
static void arm_smmu_tlb_sync(void *cookie)
{
struct arm_smmu_domain *smmu_domain = cookie;
__arm_smmu_tlb_sync(smmu_domain->smmu);
}
+#endif
static void arm_smmu_tlb_inv_context(void *cookie)
{
@@ -1383,6 +1457,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
__arm_smmu_tlb_sync(smmu);
}
+#if 0 /*Xen: Unused functionality */
static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
size_t granule, bool leaf, void *cookie)
{
@@ -1427,6 +1502,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
return false;
}
}
+#endif
static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
{
@@ -1474,6 +1550,7 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
clear_bit(idx, map);
}
+#if 0
static void arm_smmu_domain_free(struct iommu_domain *domain)
{
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
@@ -1502,7 +1579,23 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
kfree(smmu_domain);
}
+#endif
+
+static void arm_smmu_domain_free(struct iommu_domain *domain)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+ /*
+ * Xen: Remove the free functions that are not used and code related
+ * to S1 translation. We just need to free the domain and vmid here.
+ */
+ if (cfg->vmid)
+ arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
+ kfree(smmu_domain);
+}
+#if 0 /*Xen: The finalise-domain functions are not needed in their current form */
static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
struct io_pgtable_cfg *pgtbl_cfg)
{
@@ -1551,16 +1644,41 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
return 0;
}
+#endif
+
+static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain)
+{
+ int vmid;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+
+ vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
+ if (vmid < 0)
+ return vmid;
+
+	/* Xen: Get the vttbr and vtcr values
+	 * vttbr: shared with the domain's p2m page table
+	 * vtcr: the TCR settings are the same as the CPU's, since the
+	 * page tables are shared
+	 */
+
+ cfg->vmid = vmid;
+ cfg->vttbr = page_to_maddr(cfg->domain->arch.p2m.root);
+ cfg->vtcr = READ_SYSREG32(VTCR_EL2) & STRTAB_STE_2_VTCR_MASK;
+ return 0;
+}
static int arm_smmu_domain_finalise(struct iommu_domain *domain)
{
int ret;
+#if 0 /* Xen: pgtbl_cfg is not needed, so the function is trimmed below */
unsigned long ias, oas;
enum io_pgtable_fmt fmt;
struct io_pgtable_cfg pgtbl_cfg;
struct io_pgtable_ops *pgtbl_ops;
int (*finalise_stage_fn)(struct arm_smmu_domain *,
struct io_pgtable_cfg *);
+#endif
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_device *smmu = smmu_domain->smmu;
@@ -1575,6 +1693,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+#if 0
switch (smmu_domain->stage) {
case ARM_SMMU_DOMAIN_S1:
ias = VA_BITS;
@@ -1616,7 +1735,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
if (ret < 0)
free_io_pgtable_ops(pgtbl_ops);
+#endif
+ ret = arm_smmu_domain_finalise_s2(smmu_domain);
return ret;
}
@@ -1709,7 +1830,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
ste->s1_cfg = &smmu_domain->s1_cfg;
ste->s2_cfg = NULL;
+#if 0 /*Xen: S1 configuration not needed */
arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
+#endif
} else {
ste->s1_cfg = NULL;
ste->s2_cfg = &smmu_domain->s2_cfg;
@@ -1721,6 +1844,7 @@ out_unlock:
return ret;
}
+#if 0
static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot)
{
@@ -1772,6 +1896,7 @@ struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
put_device(dev);
return dev ? dev_get_drvdata(dev) : NULL;
}
+#endif
static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
{
@@ -1782,8 +1907,9 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
return sid < limit;
}
-
+#if 0
static struct iommu_ops arm_smmu_ops;
+#endif
static int arm_smmu_add_device(struct device *dev)
{
@@ -1791,9 +1917,12 @@ static int arm_smmu_add_device(struct device *dev)
struct arm_smmu_device *smmu;
struct arm_smmu_master_data *master;
struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+#if 0 /*Xen: iommu_group is not needed */
struct iommu_group *group;
+#endif
- if (!fwspec || fwspec->ops != &arm_smmu_ops)
+ /* Xen: fwspec->ops are not needed */
+ if (!fwspec)
return -ENODEV;
/*
* We _can_ actually withstand dodgy bus code re-calling add_device()
@@ -1830,6 +1959,12 @@ static int arm_smmu_add_device(struct device *dev)
}
}
+#if 0
+/*
+ * Xen: Do not need an iommu group as the stream data is carried by the SMMU
+ * master device object
+ */
+
group = iommu_group_get_for_dev(dev);
if (!IS_ERR(group)) {
iommu_group_put(group);
@@ -1837,8 +1972,16 @@ static int arm_smmu_add_device(struct device *dev)
}
return PTR_ERR_OR_ZERO(group);
+#endif
+ return 0;
}
+/*
+ * Xen: Device removal could be supported here; it becomes relevant for
+ * PCI hotplug, so it will be implemented as needed once passthrough
+ * support is available.
+ */
+#if 0
static void arm_smmu_remove_device(struct device *dev)
{
struct iommu_fwspec *fwspec = dev->iommu_fwspec;
@@ -1974,7 +2117,7 @@ static struct iommu_ops arm_smmu_ops = {
.put_resv_regions = arm_smmu_put_resv_regions,
.pgsize_bitmap = -1UL, /* Restricted during device attach */
};
-
+#endif
/* Probing and initialisation functions */
static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
struct arm_smmu_queue *q,
@@ -1984,13 +2127,19 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
{
size_t qsz = ((1 << q->max_n_shift) * dwords) << 3;
- q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma, GFP_KERNEL);
+	/* The SMMU cache coherency property is always set here, so a plain
+	 * xenheap allocation suffices; keep it naturally aligned for Q_BASE.
+	 */
+	q->base = _xzalloc(qsz, qsz);
+
if (!q->base) {
dev_err(smmu->dev, "failed to allocate queue (0x%zx bytes)\n",
qsz);
return -ENOMEM;
}
+ q->base_dma = virt_to_maddr(q->base);
+
q->prod_reg = arm_smmu_page1_fixup(prod_off, smmu);
q->cons_reg = arm_smmu_page1_fixup(cons_off, smmu);
q->ent_dwords = dwords;
@@ -2056,6 +2205,7 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
u64 reg;
u32 size, l1size;
struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+ u32 alignment;
/* Calculate the L1 size, capped to the SIDSIZE. */
size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
@@ -2069,14 +2219,17 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
size, smmu->sid_bits);
l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
- strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
- GFP_KERNEL | __GFP_ZERO);
+	alignment = max_t(u32, l1size, 64);
+	strtab = _xzalloc(l1size, alignment);
+
if (!strtab) {
dev_err(smmu->dev,
"failed to allocate l1 stream table (%u bytes)\n",
 			l1size);
return -ENOMEM;
}
+
+ cfg->strtab_dma = virt_to_maddr(strtab);
cfg->strtab = strtab;
/* Configure strtab_base_cfg for 2 levels */
@@ -2098,14 +2251,16 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
- strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
- GFP_KERNEL | __GFP_ZERO);
+ strtab = _xzalloc(size, size);
+
if (!strtab) {
dev_err(smmu->dev,
"failed to allocate linear stream table (%u bytes)\n",
size);
return -ENOMEM;
}
+
+ cfg->strtab_dma = virt_to_maddr(strtab);
cfg->strtab = strtab;
cfg->num_l1_ents = 1 << smmu->sid_bits;
@@ -2182,6 +2337,7 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
1, ARM_SMMU_POLL_TIMEOUT_US);
}
+#if 0 /* Xen: There is no MSI support as yet */
static void arm_smmu_free_msis(void *data)
{
struct device *dev = data;
@@ -2247,36 +2403,39 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
/* Add callback to free MSIs on teardown */
devm_add_action(dev, arm_smmu_free_msis, dev);
}
+#endif
static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
{
int irq, ret;
+#if 0 /*Xen: Cannot set up MSIs for now */
arm_smmu_setup_msis(smmu);
+#endif
/* Request interrupt lines */
irq = smmu->evtq.q.irq;
if (irq) {
- ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
- arm_smmu_evtq_thread,
- IRQF_ONESHOT,
- "arm-smmu-v3-evtq", smmu);
+ irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
+ ret = request_irq(irq, arm_smmu_evtq_thread,
+ 0, "arm-smmu-v3-evtq", smmu);
if (ret < 0)
dev_warn(smmu->dev, "failed to enable evtq irq\n");
}
irq = smmu->cmdq.q.irq;
if (irq) {
- ret = devm_request_irq(smmu->dev, irq,
- arm_smmu_cmdq_sync_handler, 0,
- "arm-smmu-v3-cmdq-sync", smmu);
+ irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
+ ret = request_irq(irq, arm_smmu_cmdq_sync_handler,
+ 0, "arm-smmu-v3-cmdq-sync", smmu);
if (ret < 0)
dev_warn(smmu->dev, "failed to enable cmdq-sync irq\n");
}
irq = smmu->gerr_irq;
if (irq) {
- ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
+ irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
+ ret = request_irq(irq, arm_smmu_gerror_handler,
0, "arm-smmu-v3-gerror", smmu);
if (ret < 0)
dev_warn(smmu->dev, "failed to enable gerror irq\n");
@@ -2284,12 +2443,13 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
if (smmu->features & ARM_SMMU_FEAT_PRI) {
irq = smmu->priq.q.irq;
 		if (irq) {
+			irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
- ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
- arm_smmu_priq_thread,
- IRQF_ONESHOT,
- "arm-smmu-v3-priq",
- smmu);
+ ret = request_irq(irq,
+ arm_smmu_priq_thread,
+ 0,
+ "arm-smmu-v3-priq",
+ smmu);
if (ret < 0)
dev_warn(smmu->dev,
"failed to enable priq irq\n");
@@ -2316,11 +2476,11 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
 	 * Cavium ThunderX2 implementation doesn't support unique
* irq lines. Use single irq line for all the SMMUv3 interrupts.
*/
- ret = devm_request_threaded_irq(smmu->dev, irq,
- arm_smmu_combined_irq_handler,
- arm_smmu_combined_irq_thread,
- IRQF_ONESHOT,
- "arm-smmu-v3-combined-irq", smmu);
+ ret = request_irq(irq,
+ arm_smmu_combined_irq_handler,
+ 0,
+ "arm-smmu-v3-combined-irq",
+ smmu);
if (ret < 0)
dev_warn(smmu->dev, "failed to enable combined irq\n");
} else
@@ -2542,8 +2702,11 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
smmu->features |= ARM_SMMU_FEAT_STALLS;
}
+#if 0 /* Xen: Do not enable Stage 1 translations */
+
if (reg & IDR0_S1P)
smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
+#endif
if (reg & IDR0_S2P)
smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
@@ -2616,10 +2779,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
if (reg & IDR5_GRAN4K)
smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+#if 0 /* Xen: SMMU ops do not have a pgsize_bitmap member for Xen */
if (arm_smmu_ops.pgsize_bitmap == -1UL)
arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
else
arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
+#endif
/* Output address size */
switch (reg & IDR5_OAS_MASK << IDR5_OAS_SHIFT) {
@@ -2646,10 +2811,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
smmu->oas = 48;
}
+#if 0 /* Xen: There is no support for DMA mask */
/* Set the DMA mask for our table walker */
if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
dev_warn(smmu->dev,
"failed to set DMA mask for table walker\n");
+#endif
smmu->ias = max(smmu->ias, smmu->oas);
@@ -2680,7 +2847,8 @@ static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
struct device *dev = smmu->dev;
struct acpi_iort_node *node;
- node = *(struct acpi_iort_node **)dev_get_platdata(dev);
+ /* Xen: Modification to get iort_node */
+ node = (struct acpi_iort_node *)dev->acpi_node;
/* Retrieve SMMUv3 specific data */
iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
@@ -2703,7 +2871,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
static int arm_smmu_device_dt_probe(struct platform_device *pdev,
struct arm_smmu_device *smmu)
{
- struct device *dev = &pdev->dev;
+ struct device *dev = pdev;
u32 cells;
int ret = -EINVAL;
@@ -2716,8 +2884,8 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
parse_driver_options(smmu);
- if (of_dma_is_coherent(dev->of_node))
- smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+ /* Xen: Set the COHERENCY feature */
+ smmu->features |= ARM_SMMU_FEAT_COHERENCY;
return ret;
}
@@ -2734,9 +2902,11 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
{
int irq, ret;
struct resource *res;
+#if 0 /* Xen: Do not need to set up sysfs */
resource_size_t ioaddr;
+#endif
struct arm_smmu_device *smmu;
- struct device *dev = &pdev->dev;
+ struct device *dev = pdev; /* Xen: dev is ignored */
bool bypass;
smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
@@ -2763,8 +2933,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
dev_err(dev, "MMIO region too small (%pr)\n", res);
return -EINVAL;
}
+#if 0 /* Xen: Do not need to set up sysfs */
ioaddr = res->start;
-
+#endif
smmu->base = devm_ioremap_resource(dev, res);
if (IS_ERR(smmu->base))
return PTR_ERR(smmu->base);
@@ -2802,13 +2973,16 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
return ret;
/* Record our private device structure */
+#if 0 /* Xen: SMMU is not treated as a platform device */
platform_set_drvdata(pdev, smmu);
-
+#endif
/* Reset the device */
ret = arm_smmu_device_reset(smmu, bypass);
if (ret)
return ret;
+/* Xen: Not creating an IOMMU device list for Xen */
+#if 0
/* And we're up. Go go go! */
ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
"smmu3.%pa", &ioaddr);
@@ -2844,9 +3018,18 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
if (ret)
return ret;
}
+#endif
+ /*
+ * Xen: Keep a list of all probed devices. This will be used to query
+ * the smmu devices based on the fwnode.
+ */
+ INIT_LIST_HEAD(&smmu->devices);
+ spin_lock(&arm_smmu_devices_lock);
+ list_add(&smmu->devices, &arm_smmu_devices);
+ spin_unlock(&arm_smmu_devices_lock);
return 0;
}
-
+#if 0
static int arm_smmu_device_remove(struct platform_device *pdev)
{
struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
@@ -2860,6 +3043,10 @@ static void arm_smmu_device_shutdown(struct platform_device *pdev)
{
arm_smmu_device_remove(pdev);
}
+#endif
+
+#define MODULE_DEVICE_TABLE(type, name)
+#define of_device_id dt_device_match
static const struct of_device_id arm_smmu_of_match[] = {
{ .compatible = "arm,smmu-v3", },
@@ -2867,6 +3054,7 @@ static const struct of_device_id arm_smmu_of_match[] = {
};
MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
+#if 0
static struct platform_driver arm_smmu_driver = {
.driver = {
.name = "arm-smmu-v3",
@@ -2883,3 +3071,318 @@ IOMMU_OF_DECLARE(arm_smmuv3, "arm,smmu-v3", NULL);
MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
MODULE_LICENSE("GPL v2");
+#endif
+
+/***** Start of Xen specific code *****/
+
+static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
+{
+ struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
+ struct iommu_domain *cfg;
+
+ spin_lock(&smmu_domain->lock);
+ list_for_each_entry(cfg, &smmu_domain->iommu_domains, list) {
+ /*
+ * Only invalidate the context when SMMU is present.
+ * This is because the context initialization is delayed
+ * until a master has been added.
+ */
+ if (unlikely(!ACCESS_ONCE(cfg->priv->smmu)))
+ continue;
+ arm_smmu_tlb_inv_context(cfg->priv);
+ }
+ spin_unlock(&smmu_domain->lock);
+ return 0;
+}
+
+static int __must_check arm_smmu_iotlb_flush(struct domain *d,
+ unsigned long gfn,
+ unsigned int page_count)
+{
+ return arm_smmu_iotlb_flush_all(d);
+}
+
+static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
+ struct device *dev)
+{
+ struct iommu_domain *domain;
+ struct arm_smmu_xen_domain *xen_domain;
+ struct arm_smmu_device *smmu;
+ struct arm_smmu_domain *smmu_domain;
+
+ xen_domain = dom_iommu(d)->arch.priv;
+
+ smmu = arm_smmu_get_by_fwnode(dev->iommu_fwspec->iommu_fwnode);
+ if (!smmu)
+ return NULL;
+
+ /*
+ * Loop through the &xen_domain->iommu_domains to locate a context
+ * assigned to this SMMU
+ */
+ list_for_each_entry(domain, &xen_domain->iommu_domains, list) {
+ smmu_domain = to_smmu_domain(domain);
+ if (smmu_domain->smmu == smmu)
+ return domain;
+ }
+
+ return NULL;
+}
+
+static void arm_smmu_destroy_iommu_domain(struct iommu_domain *domain)
+{
+ list_del(&domain->list);
+ arm_smmu_domain_free(domain);
+}
+
+static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
+ struct device *dev, u32 flag)
+{
+ int ret = 0;
+ struct iommu_domain *domain;
+ struct arm_smmu_xen_domain *xen_domain;
+ struct arm_smmu_domain *arm_smmu;
+
+ xen_domain = dom_iommu(d)->arch.priv;
+
+ if (!dev->archdata.iommu) {
+ dev->archdata.iommu = xzalloc(struct arm_smmu_xen_device);
+ if (!dev->archdata.iommu)
+ return -ENOMEM;
+ }
+
+ ret = arm_smmu_add_device(dev);
+ if (ret)
+ return ret;
+
+ spin_lock(&xen_domain->lock);
+
+ /*
+ * Check to see if an iommu_domain already exists for this xen domain
+ * under the same SMMU
+ */
+ domain = arm_smmu_get_domain(d, dev);
+ if (!domain) {
+
+ domain = arm_smmu_domain_alloc(IOMMU_DOMAIN_DMA);
+ if (!domain) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ arm_smmu = to_smmu_domain(domain);
+ arm_smmu->s2_cfg.domain = d;
+
+ /* Chain the new context to the domain */
+ list_add(&domain->list, &xen_domain->iommu_domains);
+
+ }
+
+ ret = arm_smmu_attach_dev(domain, dev);
+ if (ret) {
+ if (domain->ref.counter == 0)
+ arm_smmu_destroy_iommu_domain(domain);
+ } else {
+ atomic_inc(&domain->ref);
+ }
+
+out:
+ spin_unlock(&xen_domain->lock);
+ return ret;
+}
+
+static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+{
+ struct iommu_domain *domain = arm_smmu_get_domain(d, dev);
+ struct arm_smmu_xen_domain *xen_domain;
+ struct arm_smmu_domain *arm_smmu = to_smmu_domain(domain);
+
+ xen_domain = dom_iommu(d)->arch.priv;
+
+ if (!arm_smmu || arm_smmu->s2_cfg.domain != d) {
+ dev_err(dev, " not attached to domain %d\n", d->domain_id);
+ return -ESRCH;
+ }
+
+ spin_lock(&xen_domain->lock);
+
+ arm_smmu_detach_dev(dev);
+ atomic_dec(&domain->ref);
+
+ if (domain->ref.counter == 0)
+ arm_smmu_destroy_iommu_domain(domain);
+
+ spin_unlock(&xen_domain->lock);
+
+ return 0;
+}
+
+static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
+ u8 devfn, struct device *dev)
+{
+ int ret = 0;
+
+ /* Don't allow remapping on a domain other than the hardware domain */
+ if (t && t != hardware_domain)
+ return -EPERM;
+
+ if (t == s)
+ return 0;
+
+ ret = arm_smmu_deassign_dev(s, dev);
+ if (ret)
+ return ret;
+
+ if (t) {
+ /* No flags are defined for ARM. */
+ ret = arm_smmu_assign_dev(t, devfn, dev, 0);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int arm_smmu_iommu_domain_init(struct domain *d)
+{
+ struct arm_smmu_xen_domain *xen_domain;
+
+ xen_domain = xzalloc(struct arm_smmu_xen_domain);
+ if (!xen_domain)
+ return -ENOMEM;
+
+ spin_lock_init(&xen_domain->lock);
+ INIT_LIST_HEAD(&xen_domain->iommu_domains);
+
+ dom_iommu(d)->arch.priv = xen_domain;
+
+ return 0;
+}
+
+static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
+{
+}
+
+static void arm_smmu_iommu_domain_teardown(struct domain *d)
+{
+ struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+
+ ASSERT(list_empty(&xen_domain->iommu_domains));
+ xfree(xen_domain);
+}
+
+static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
+ unsigned long mfn, unsigned int flags)
+{
+ p2m_type_t t;
+
+ /*
+ * Grant mappings can be used for DMA requests. The dev_bus_addr
+ * returned by the hypercall is the MFN (not the IPA). For device
+ * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain
+ * p2m to allow DMA request to work.
+ * This is only valid when the domain is directed mapped. Hence this
+ * function should only be used by gnttab code with gfn == mfn.
+ */
+ BUG_ON(!is_domain_direct_mapped(d));
+ BUG_ON(mfn != gfn);
+
+ /* We only support readable and writable flags */
+ if (!(flags & (IOMMUF_readable | IOMMUF_writable)))
+ return -EINVAL;
+
+ t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro;
+
+ /*
+ * The function guest_physmap_add_entry replaces the current mapping
+ * if there is already one...
+ */
+ return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
+}
+
+static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
+{
+ /*
+ * This function should only be used by gnttab code when the domain
+ * is direct mapped
+ */
+ if (!is_domain_direct_mapped(d))
+ return -EINVAL;
+
+ return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
+}
+
+static const struct iommu_ops arm_smmu_iommu_ops = {
+ .init = arm_smmu_iommu_domain_init,
+ .hwdom_init = arm_smmu_iommu_hwdom_init,
+ .teardown = arm_smmu_iommu_domain_teardown,
+ .iotlb_flush = arm_smmu_iotlb_flush,
+ .iotlb_flush_all = arm_smmu_iotlb_flush_all,
+ .assign_device = arm_smmu_assign_dev,
+ .reassign_device = arm_smmu_reassign_dev,
+ .map_page = arm_smmu_map_page,
+ .unmap_page = arm_smmu_unmap_page,
+};
+
+static
+struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+ struct arm_smmu_device *smmu = NULL;
+
+ spin_lock(&arm_smmu_devices_lock);
+ list_for_each_entry(smmu, &arm_smmu_devices, devices) {
+ if (smmu->dev->fwnode == fwnode)
+ break;
+ }
+ spin_unlock(&arm_smmu_devices_lock);
+
+ return smmu;
+}
+
+static __init int arm_smmu_dt_init(struct dt_device_node *dev,
+ const void *data)
+{
+ int rc;
+
+ /*
+ * Even if the device can't be initialized, we don't want to
+ * give the SMMU device to dom0.
+ */
+ dt_device_set_used_by(dev, DOMID_XEN);
+
+ rc = arm_smmu_device_probe(dt_to_dev(dev));
+ if (rc)
+ return rc;
+
+ iommu_set_ops(&arm_smmu_iommu_ops);
+
+ return 0;
+}
+
+DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
+ .dt_match = arm_smmu_of_match,
+ .init = arm_smmu_dt_init,
+DT_DEVICE_END
+
+#ifdef CONFIG_ACPI
+/* Set up the IOMMU */
+static int __init arm_smmu_acpi_init(const void *data)
+{
+ int rc;
+ rc = arm_smmu_device_probe((struct device *)data);
+
+ if (rc)
+ return rc;
+
+ iommu_set_ops(&arm_smmu_iommu_ops);
+ return 0;
+}
+
+ACPI_DEVICE_START(asmmuv3, "ARM SMMU V3", DEVICE_IOMMU)
+ .class_type = ACPI_IORT_NODE_SMMU_V3,
+ .init = arm_smmu_acpi_init,
+ACPI_DEVICE_END
+
+#endif
--
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc. Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [RFC v3 1/4] Port WARN_ON_ONCE() from Linux
2017-12-05 3:59 ` [RFC v3 1/4] Port WARN_ON_ONCE() from Linux Sameer Goel
@ 2017-12-05 9:18 ` Jan Beulich
0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2017-12-05 9:18 UTC (permalink / raw)
To: Sameer Goel
Cc: sstabellini, wei.liu2, mjaggi, george.dunlap, Andrew.Cooper3,
julien.grall, xen-devel, Ian.Jackson, nd, shankerd
>>> On 05.12.17 at 04:59, <sgoel@codeaurora.org> wrote:
> Port WARN_ON_ONCE macro from Linux. A return value is expected from this
> macro, so the implementation does not follow the Xen convention of wrapping
> macros in a do..while.
There's no such convention for macros producing a value.
> ---
Missing S-o-b.
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -11,6 +11,17 @@
> #define BUG_ON(p) do { if (unlikely(p)) BUG(); } while (0)
> #define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
>
> +#define WARN_ON_ONCE(p) ({ \
> + static bool __section(".data.unlikely") __warned; \
I think this will need an addition to xen.lds.S (both x86 and ARM).
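For reference, that would presumably be a one-line addition to the .data
output section of each xen.lds.S, along the lines of (untested sketch):

  .data : {
       ...
       *(.data)
       *(.data.unlikely)   /* once-only warning flags */
       ...
  }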
> + int __ret_warn_once = !!(p); \
Use bool, and please don't use leading underscores for identifiers when
those conflict with the C spec.
> + if (unlikely(__ret_warn_once && !__warned)) { \
I don't think using likely() / unlikely() on expressions involving && or
|| is ever a useful thing - in the case here you really mean
if (unlikely(__ret_warn_once) && unlikely(!__warned)) {
> + __warned = true; \
> + WARN_ON(1); \
WARN()
I also think that you would be better off using Xen style here, despite
BUG_ON() and WARN_ON() themselves slightly violating it. The
file clearly is not a Linux-derived file (anymore).
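Taking all of the above together, something along these lines (untested,
just to illustrate the points) is what I would expect:

#define WARN_ON_ONCE(p)                                       \
({                                                            \
    static bool __section(".data.unlikely") warned_once;      \
    bool ret_warn_once = !!(p);                               \
                                                              \
    if ( unlikely(ret_warn_once) && unlikely(!warned_once) )  \
    {                                                         \
        warned_once = true;                                   \
        WARN();                                               \
    }                                                         \
    unlikely(ret_warn_once);                                  \
})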
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 2/4] xen/linux_compat: Add a Linux compat header
2017-12-05 3:59 ` [RFC v3 2/4] xen/linux_compat: Add a Linux compat header Sameer Goel
@ 2017-12-05 9:20 ` Jan Beulich
2017-12-05 11:37 ` Julien Grall
2017-12-05 12:31 ` Julien Grall
1 sibling, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2017-12-05 9:20 UTC (permalink / raw)
To: Sameer Goel
Cc: sstabellini, wei.liu2, mjaggi, george.dunlap, Andrew.Cooper3,
julien.grall, xen-devel, Ian.Jackson, nd, shankerd
>>> On 05.12.17 at 04:59, <sgoel@codeaurora.org> wrote:
> For porting files from Linux it is useful to have a Linux API to Xen API
> mapping header at a common location.
Looking at what you add here I really think "no, please don't". But
let's see what other maintainers thinks.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 2/4] xen/linux_compat: Add a Linux compat header
2017-12-05 9:20 ` Jan Beulich
@ 2017-12-05 11:37 ` Julien Grall
0 siblings, 0 replies; 19+ messages in thread
From: Julien Grall @ 2017-12-05 11:37 UTC (permalink / raw)
To: Jan Beulich, Sameer Goel
Cc: sstabellini, wei.liu2, mjaggi, george.dunlap, Andrew.Cooper3,
julien.grall, Ian.Jackson, xen-devel, nd, shankerd
Hi Jan,
On 05/12/17 09:20, Jan Beulich wrote:
>>>> On 05.12.17 at 04:59, <sgoel@codeaurora.org> wrote:
>> For porting files from Linux it is useful to have a Linux API to Xen API
>> mapping header at a common location.
>
> Looking at what you add here I really think "no, please don't". But
> let's see what other maintainers thinks.
Most of those helpers are necessary when bringing non-modified code over
from Linux.
We already have some drivers providing their own wrappers (e.g SMMUv2)
and I would expect more in the future. Actually, there is another series
on the ML adding similar wrappers (see [1]).
So I think this is the right time to think about common helpers. This
will avoid duplication and make porting easier. However, I don't agree with
all the wrappers in this file. I will comment on the patch.
Cheers,
[1]
https://lists.xenproject.org/archives/html/xen-devel/2017-11/msg00666.html
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 2/4] xen/linux_compat: Add a Linux compat header
2017-12-05 3:59 ` [RFC v3 2/4] xen/linux_compat: Add a Linux compat header Sameer Goel
2017-12-05 9:20 ` Jan Beulich
@ 2017-12-05 12:31 ` Julien Grall
2017-12-15 22:32 ` Goel, Sameer
1 sibling, 1 reply; 19+ messages in thread
From: Julien Grall @ 2017-12-05 12:31 UTC (permalink / raw)
To: Sameer Goel, xen-devel, julien.grall, mjaggi
Cc: sstabellini, wei.liu2, george.dunlap, Andrew.Cooper3, jbeulich,
Ian.Jackson, nd, shankerd
Hi Sameer,
On 05/12/17 03:59, Sameer Goel wrote:
> For porting files from Linux it is useful to have a Linux API to Xen API
> mapping header at a common location.
> This file adds common API functions and other defines that are needed for
> porting arm SMMU drivers.
>
> ---
> xen/include/xen/linux_compat.h | 106 +++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 106 insertions(+)
> create mode 100644 xen/include/xen/linux_compat.h
>
> diff --git a/xen/include/xen/linux_compat.h b/xen/include/xen/linux_compat.h
> new file mode 100644
> index 0000000..217e0cc
> --- /dev/null
> +++ b/xen/include/xen/linux_compat.h
> @@ -0,0 +1,106 @@
> +/******************************************************************************
> + * include/xen/linux_compat.h
> + *
> + * Compatibility defines for porting code from Linux to Xen
> + *
> + * Copyright (c) 2017 Linaro Limited
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __XEN_LINUX_COMPAT_H__
> +#define __XEN_LINUX_COMPAT_H__
> +
> +#include <asm/types.h>
> +
> +typedef paddr_t phys_addr_t;
> +typedef paddr_t dma_addr_t;
> +
> +/* Alias to Xen device tree helpers */
> +#define device_node dt_device_node
> +#define of_phandle_args dt_phandle_args
> +#define of_device_id dt_device_match
> +#define of_match_node dt_match_node
> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
> +#define of_property_read_bool dt_property_read_bool
> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
> +/* The user should consider if it is safe to treat mutex as a spinlock */
I am against defining mutex as spinlock in a generic header. People will
overlook it, and it is hardly going to be detected in a verbatim port.
This should be done on a case-by-case basis.
> +#define mutex spinlock_t
> +#define mutex_init spin_lock_init
> +#define mutex_lock spin_lock
> +#define mutex_unlock spin_unlock
> +
> +#define ilog2 LOG_2
There is only one user of LOG_2 in Xen. So wouldn't it be better to
rename it directly to ilog2?
> +
> +#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) \
> +({ \
> + s_time_t deadline = NOW() + MICROSECS(timeout_us); \
> + for (;;) \
> + { \
> + (val) = op(addr); \
> + if ( cond ) \
> + break; \
> + if ( NOW() > deadline ) \
> + { \
> + (val) = op(addr); \
> + break; \
> + } \
> + udelay(sleep_us); \
> + } \
> + (cond) ? 0 : -ETIMEDOUT; \
> +})
> +
> +#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
> + readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
I don't think putting read* macros in a common header is necessary.
Their use in Linux is very limited.
> +
> +/* Xen: Helpers for IRQ functions */
> +#define request_irq(irq, func, flags, name, dev) request_irq(irq, flags, func, name, dev)
> +#define free_irq release_irq
> +
> +enum irqreturn {
> + IRQ_NONE = (0 << 0),
> + IRQ_HANDLED = (1 << 0),
> + IRQ_WAKE_THREAD = (2 << 0),
> +};
> +
> +typedef enum irqreturn irqreturn_t;
> +
> +/* Device logger functions */
> +#define dev_print(dev, lvl, fmt, ...) \
> + printk(lvl fmt, ## __VA_ARGS__)
> +
> +#define dev_dbg(dev, fmt, ...) dev_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
> +#define dev_notice(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
> +#define dev_warn(dev, fmt, ...) dev_print(dev, XENLOG_WARNING, fmt, ## __VA_ARGS__)
> +#define dev_err(dev, fmt, ...) dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
> +#define dev_info(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
> +
> +#define dev_err_ratelimited(dev, fmt, ...) \
> + dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
> +
> +#define dev_name(dev) dt_node_full_name(dev_to_dt(dev))
> +
> +/* Alias to Xen allocation helpers */
> +#define kfree xfree
> +#define kmalloc(size, flags) _xmalloc(size, sizeof(void *))
> +#define kzalloc(size, flags) _xzalloc(size, sizeof(void *))
> +#define devm_kzalloc(dev, size, flags) _xzalloc(size, sizeof(void *))
> +#define kmalloc_array(size, n, flags) _xmalloc_array(size, sizeof(void *), n)
> +
> +/* Alias to Xen time functions */
> +#define ktime_t s_time_t
> +#define ktime_add_us(t,i) (NOW() + MICROSECS(i))
> +#define ktime_compare(t,i) (NOW() > (i))
> +
> +#endif /* __XEN_LINUX_COMPAT_H__ */
>
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
2017-12-05 3:59 ` [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver Sameer Goel
@ 2017-12-05 14:17 ` Julien Grall
2017-12-05 23:26 ` Goel, Sameer
` (2 more replies)
2017-12-12 8:09 ` Manish Jaggi
1 sibling, 3 replies; 19+ messages in thread
From: Julien Grall @ 2017-12-05 14:17 UTC (permalink / raw)
To: Sameer Goel, xen-devel, julien.grall, mjaggi; +Cc: sstabellini, shankerd
Hello,
On 05/12/17 03:59, Sameer Goel wrote:
> This driver follows an approach similar to smmu driver. The intent here
> is to reuse as much Linux code as possible.
> - Glue code has been introduced in headers to bridge the API calls.
> - Called Linux functions from the Xen IOMMU function calls.
> - Xen modifications are preceded by /*Xen: comment */
> - New config items for SMMUv3 and legacy SMMU have been defined.
There are no reason to touch legacy SMMU in this patch. Please move that
outside of it.
>
> Signed-off-by: Sameer Goel <sgoel@codeaurora.org>
> ---
> xen/drivers/Kconfig | 2 +
> xen/drivers/passthrough/arm/Kconfig | 14 +
> xen/drivers/passthrough/arm/Makefile | 3 +-
> xen/drivers/passthrough/arm/arm_smmu.h | 189 ++++++++++
> xen/drivers/passthrough/arm/smmu-v3.c | 619 ++++++++++++++++++++++++++++++---
> 5 files changed, 768 insertions(+), 59 deletions(-)
> create mode 100644 xen/drivers/passthrough/arm/Kconfig
> create mode 100644 xen/drivers/passthrough/arm/arm_smmu.h
>
> diff --git a/xen/drivers/Kconfig b/xen/drivers/Kconfig
> index bc3a54f..6126553 100644
> --- a/xen/drivers/Kconfig
> +++ b/xen/drivers/Kconfig
> @@ -12,4 +12,6 @@ source "drivers/pci/Kconfig"
>
> source "drivers/video/Kconfig"
>
> +source "drivers/passthrough/arm/Kconfig"
> +
> endmenu
> diff --git a/xen/drivers/passthrough/arm/Kconfig b/xen/drivers/passthrough/arm/Kconfig
> new file mode 100644
> index 0000000..9ac4cea
> --- /dev/null
> +++ b/xen/drivers/passthrough/arm/Kconfig
> @@ -0,0 +1,14 @@
> +
> +config ARM_SMMU
> + bool "ARM SMMU v1/2 support"
> + depends on ARM_64
Why? SMMUv1 and SMMUv2 support Arm 32-bit.
> + help
> + Support for implementations of the ARM System MMU architecture. (1/2)
I am not sure I understand the (1/2) after the full stop.
> +
> +config ARM_SMMU_v3
> + bool "ARM SMMUv3 Support"
> + depends on ARM_64
> + help
> + Support for implementations of the ARM System MMU architecture
> + version 3.
> +
> diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
> index f4cd26e..5b3eb15 100644
> --- a/xen/drivers/passthrough/arm/Makefile
> +++ b/xen/drivers/passthrough/arm/Makefile
> @@ -1,2 +1,3 @@
> obj-y += iommu.o
> -obj-y += smmu.o
> +obj-$(CONFIG_ARM_SMMU) += smmu.o
> +obj-$(CONFIG_ARM_SMMU_v3) += smmu-v3.o
> diff --git a/xen/drivers/passthrough/arm/arm_smmu.h b/xen/drivers/passthrough/arm/arm_smmu.h
> new file mode 100644
> index 0000000..b5e161f
> --- /dev/null
> +++ b/xen/drivers/passthrough/arm/arm_smmu.h
I don't think there is any value in using Linux coding style in this
header. It contains Xen stubs.
I would also have expected this new file to come in a separate patch
with the associated modifications to SMMUv2. This would make it easier to
see what could be common.
> @@ -0,0 +1,189 @@
> +/******************************************************************************
> + * ./arm_smmu.h
> + *
> + * Common compatibility defines and data_structures for porting arm smmu
> + * drivers from Linux.
[...]
> +static struct resource *platform_get_resource(struct platform_device *pdev,
> + unsigned int type,
> + unsigned int num)
> +{
> + /*
> + * The resource is only used between 2 calls of platform_get_resource.
> + * It's quite ugly but it avoids adding too much code to the part
> + * imported from Linux
> + */
> + static struct resource res;
> + struct acpi_iort_node *iort_node;
> + struct acpi_iort_smmu_v3 *node_smmu_data;
> + int ret = 0;
> +
> + res.type = type;
> +
> + switch (type) {
> + case IORESOURCE_MEM:
> + if (pdev->type == DEV_ACPI) {
> + ret = 1;
> + iort_node = pdev->acpi_node;
> + node_smmu_data =
> + (struct acpi_iort_smmu_v3 *)iort_node->node_data;
Above you say: "Common compatibility defines and data_structures for
porting arm smmu driver from Linux". But this code is clearly SMMUv3.
> +
> + if (node_smmu_data != NULL) {
> + res.addr = node_smmu_data->base_address;
> + res.size = SZ_128K;
> + ret = 0;
> + }
> + } else {
> + ret = dt_device_get_address(dev_to_dt(pdev), num,
> + &res.addr, &res.size);
> + }
> +
> + return ((ret) ? NULL : &res);
> +
> + case IORESOURCE_IRQ:
> + /* ACPI case not implemented as there is no use case for it */
> + ret = platform_get_irq(dev_to_dt(pdev), num);
> +
> + if (ret < 0)
> + return NULL;
> +
> + res.addr = ret;
> + res.size = 1;
> +
> + return &res;
> +
> + default:
> + return NULL;
> + }
> +}
> +
> +static int platform_get_irq_byname(struct platform_device *pdev, const char *name)
> +{
> + const struct dt_property *dtprop;
> + struct acpi_iort_node *iort_node;
> + struct acpi_iort_smmu_v3 *node_smmu_data;
> + int ret = 0;
> +
> + if (pdev->type == DEV_ACPI) {
> + iort_node = pdev->acpi_node;
> + node_smmu_data = (struct acpi_iort_smmu_v3 *)iort_node->node_data;
Ditto.
> +
> + if (node_smmu_data != NULL) {
> + if (!strcmp(name, "eventq"))
> + ret = node_smmu_data->event_gsiv;
> + else if (!strcmp(name, "priq"))
> + ret = node_smmu_data->pri_gsiv;
> + else if (!strcmp(name, "cmdq-sync"))
> + ret = node_smmu_data->sync_gsiv;
> + else if (!strcmp(name, "gerror"))
> + ret = node_smmu_data->gerr_gsiv;
> + else
> + ret = -EINVAL;
> + }
> + } else {
> + dtprop = dt_find_property(dev_to_dt(pdev), "interrupt-names", NULL);
> + if (!dtprop)
> + return -EINVAL;
> +
> + if (!dtprop->value)
> + return -ENODATA;
> + }
> +
> + return ret;
> +}
> +
> +/* Xen: Stub out DMA domain related functions */
I don't think 'Xen:' is necessary as this file contains Xen stubs.
> +#define iommu_get_dma_cookie(dom) 0
> +#define iommu_put_dma_cookie(dom) 0
> +
> +static void __iomem *devm_ioremap_resource(struct device *dev,
> + struct resource *res)
> +{
> + void __iomem *ptr;
> +
> + if (!res || res->type != IORESOURCE_MEM) {
> + dev_err(dev, "Invalid resource\n");
> + return ERR_PTR(-EINVAL);
> + }
> +
> + ptr = ioremap_nocache(res->addr, res->size);
> + if (!ptr) {
> + dev_err(dev,
> + "ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
> + res->addr, res->size);
> + return ERR_PTR(-ENOMEM);
> + }
> +
> + return ptr;
> +}
> +
> +/* Xen: Dummy iommu_domain */
> +struct iommu_domain {
> + /* Runtime SMMU configuration for this iommu_domain */
> + struct arm_smmu_domain *priv;
> + unsigned int type;
What are the values for type?
> +
> + atomic_t ref;
> + /* Used to link iommu_domain contexts for a same domain.
/*
* Used ...
*/
> + * There is at least one per-SMMU to used by the domain.
> + */
> + struct list_head list;
> +};
> +/* Xen: Domain type definitions. Not really needed for Xen, defining to port
/*
* Xen: ...
> + * Linux code as-is
> + */
> +#define IOMMU_DOMAIN_UNMANAGED 0
> +#define IOMMU_DOMAIN_DMA 1
> +#define IOMMU_DOMAIN_IDENTITY 2
> +
> +/* Xen: Describes information required for a Xen domain */
> +struct arm_smmu_xen_domain {
> + spinlock_t lock;
> + /* List of iommu domains associated to this domain */
> + struct list_head iommu_domains;
> +};
> +
> +/*
> + * Xen: Information about each device stored in dev->archdata.iommu
> + *
> + * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
> + * the SMMU).
> + */
> +struct arm_smmu_xen_device {
> + struct iommu_domain *domain;
> +};
> +
> +#endif /* __ARM_SMMU_H__ */
Missing emacs magic.
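I.e. the usual Xen footer:

/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */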
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index e67ba6c..c6c1b99 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -18,28 +18,38 @@
> * Author: Will Deacon <will.deacon@arm.com>
> *
> * This driver is powered by bad coffee and bombay mix.
> + *
> + *
> + * Based on Linux drivers/iommu/arm-smmu-v3.c
> + * => commit 7aa8619a66aea52b145e04cbab4f8d6a4e5f3f3b
> + *
> + * Xen modifications:
> + * Sameer Goel <sameer.goel@linaro.org>
> + * Copyright (C) 2017, The Linux Foundation, All rights reserved.
> + *
> */
>
> -#include <linux/acpi.h>
> -#include <linux/acpi_iort.h>
> -#include <linux/delay.h>
> -#include <linux/dma-iommu.h>
> -#include <linux/err.h>
> -#include <linux/interrupt.h>
> -#include <linux/iommu.h>
> -#include <linux/iopoll.h>
> -#include <linux/module.h>
> -#include <linux/msi.h>
> -#include <linux/of.h>
> -#include <linux/of_address.h>
> -#include <linux/of_iommu.h>
> -#include <linux/of_platform.h>
> -#include <linux/pci.h>
> -#include <linux/platform_device.h>
> -
> -#include <linux/amba/bus.h>
> -
> -#include "io-pgtable.h"
> +#include <xen/acpi.h>
> +#include <xen/config.h>
> +#include <xen/delay.h>
> +#include <xen/errno.h>
> +#include <xen/err.h>
> +#include <xen/irq.h>
> +#include <xen/lib.h>
> +#include <xen/linux_compat.h>
> +#include <xen/list.h>
> +#include <xen/mm.h>
> +#include <xen/rbtree.h>
> +#include <xen/sched.h>
> +#include <xen/sizes.h>
> +#include <xen/vmap.h>
> +#include <acpi/acpi_iort.h>
> +#include <asm/atomic.h>
> +#include <asm/device.h>
> +#include <asm/io.h>
> +#include <asm/platform.h>
> +
> +#include "arm_smmu.h" /* Not a self contained header. So last in the list */
>
> /* MMIO registers */
> #define ARM_SMMU_IDR0 0x0
> @@ -423,9 +433,12 @@
> #endif
>
> static bool disable_bypass;
> +
> +#if 0 /* Xen: Not applicable for Xen */
> module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
> MODULE_PARM_DESC(disable_bypass,
> "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
> +#endif
Can't you stub module_param_named and MODULE_PARM_DESC to avoid the #if 0?
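E.g. (sketch), in the same spirit as the MODULE_DEVICE_TABLE stub added
further down in this file:

/* Xen: No insmod parameters, so stub out the Linux module machinery */
#define module_param_named(name, value, type, perm)
#define MODULE_PARM_DESC(parm, desc)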
>
> enum pri_resp {
> PRI_RESP_DENY,
> @@ -433,6 +446,7 @@ enum pri_resp {
> PRI_RESP_SUCC,
> };
>
> +#if 0 /* Xen: No MSI support in this iteration */
> enum arm_smmu_msi_index {
> EVTQ_MSI_INDEX,
> GERROR_MSI_INDEX,
> @@ -457,6 +471,7 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
> ARM_SMMU_PRIQ_IRQ_CFG2,
> },
> };
> +#endif
>
> struct arm_smmu_cmdq_ent {
> /* Common fields */
> @@ -561,6 +576,8 @@ struct arm_smmu_s2_cfg {
> u16 vmid;
> u64 vttbr;
> u64 vtcr;
> + /* Xen: Domain associated to this configuration */
> + struct domain *domain;
> };
>
> struct arm_smmu_strtab_ent {
> @@ -635,9 +652,21 @@ struct arm_smmu_device {
> struct arm_smmu_strtab_cfg strtab_cfg;
>
> /* IOMMU core code handle */
> +#if 0 /*Xen: Generic iommu_device ref not needed here */
> struct iommu_device iommu;
> +#endif
> + /* Xen: Need to keep a list of SMMU devices */
> + struct list_head devices;
> };
>
> +/* Xen: Keep a list of devices associated with this driver */
> +static DEFINE_SPINLOCK(arm_smmu_devices_lock);
> +static LIST_HEAD(arm_smmu_devices);
> +/* Xen: Helper for finding a device using fwnode */
> +static
> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode);
> +
> +
> /* SMMU private data for each master */
> struct arm_smmu_master_data {
> struct arm_smmu_device *smmu;
> @@ -654,7 +683,7 @@ enum arm_smmu_domain_stage {
>
> struct arm_smmu_domain {
> struct arm_smmu_device *smmu;
> - struct mutex init_mutex; /* Protects smmu pointer */
> + mutex init_mutex; /* Protects smmu pointer */
>
> struct io_pgtable_ops *pgtbl_ops;
>
> @@ -961,6 +990,7 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
> spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
> }
>
> +#if 0 /*Xen: Comment out functions that set up S1 translations */
Why? I do agree that the code will not be used by Xen, but I would
prefer that you minimize the number of #ifdefs.
> /* Context descriptor manipulation functions */
> static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
> {
> @@ -1003,6 +1033,7 @@ static void arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
>
> cfg->cdptr[3] = cpu_to_le64(cfg->cd.mair << CTXDESC_CD_3_MAIR_SHIFT);
> }
> +#endif
>
> /* Stream table manipulation functions */
> static void
> @@ -1164,6 +1195,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
> void *strtab;
> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
> + u32 alignment = 0;
It is not necessary to initialize alignment. Also, we are trying to limit
the use of u* in favor of uint32_t.
>
> if (desc->l2ptr)
> return 0;
> @@ -1172,14 +1204,16 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
> strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
>
> desc->span = STRTAB_SPLIT + 1;
> - desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
> - GFP_KERNEL | __GFP_ZERO);
> + /* Alignment picked from ARM SMMU arch version 3.x. L1ST.L2Ptr */
> + alignment = 1 << ((5 + (desc->span - 1)));
> + desc->l2ptr = _xzalloc(size, alignment);
> if (!desc->l2ptr) {
> dev_err(smmu->dev,
> "failed to allocate l2 stream table for SID %u\n",
> sid);
> return -ENOMEM;
> }
> + desc->l2ptr_dma = virt_to_maddr(desc->l2ptr);
>
> arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
> arm_smmu_write_strtab_l1_desc(strtab, desc);
> @@ -1232,7 +1266,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>
> dev_info(smmu->dev, "unexpected PRI request received:\n");
> dev_info(smmu->dev,
> - "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
> + "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova %#" PRIx64 "\n",
> sid, ssid, grpid, last ? "L" : "",
> evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
> evt[0] & PRIQ_0_PERM_READ ? "R" : "",
> @@ -1346,6 +1380,8 @@ static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
> {
> arm_smmu_gerror_handler(irq, dev);
> arm_smmu_cmdq_sync_handler(irq, dev);
> + /*Xen: No threaded irq. So call the required function from here */
> + arm_smmu_combined_irq_thread(irq, dev);
> return IRQ_WAKE_THREAD;
> }
>
> @@ -1358,11 +1394,49 @@ static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
> arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> }
>
> +static void arm_smmu_evtq_thread_xen(int irq, void *dev,
> + struct cpu_user_regs *regs)
> +{
> + arm_smmu_evtq_thread(irq, dev);
> +}
> +
> +static void arm_smmu_priq_thread_xen(int irq, void *dev,
> + struct cpu_user_regs *regs)
> +{
> + arm_smmu_priq_thread(irq, dev);
> +}
> +
> +static void arm_smmu_cmdq_sync_handler_xen(int irq, void *dev,
> + struct cpu_user_regs *regs)
> +{
> + arm_smmu_cmdq_sync_handler(irq, dev);
> +}
> +
> +static void arm_smmu_gerror_handler_xen(int irq, void *dev,
> + struct cpu_user_regs *regs)
> +{
> + arm_smmu_gerror_handler(irq, dev);
> +}
> +
> +static void arm_smmu_combined_irq_handler_xen(int irq, void *dev,
> + struct cpu_user_regs *regs)
> +{
> + arm_smmu_combined_irq_handler(irq, dev);
> +}
> +
Missing:
/* Xen: .... */
> +#define arm_smmu_evtq_thread arm_smmu_evtq_thread_xen
> +#define arm_smmu_priq_thread arm_smmu_priq_thread_xen
> +#define arm_smmu_cmdq_sync_handler arm_smmu_cmdq_sync_handler_xen
> +#define arm_smmu_gerror_handler arm_smmu_gerror_handler_xen
> +#define arm_smmu_combined_irq_handler arm_smmu_combined_irq_handler_xen
> +
> +#if 0 /*Xen: Unused function */
> static void arm_smmu_tlb_sync(void *cookie)
> {
> struct arm_smmu_domain *smmu_domain = cookie;
> __arm_smmu_tlb_sync(smmu_domain->smmu);
> }
> +#endif
>
> static void arm_smmu_tlb_inv_context(void *cookie)
> {
> @@ -1383,6 +1457,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
> __arm_smmu_tlb_sync(smmu);
> }
>
> +#if 0 /*Xen: Unused functionality */
> static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> size_t granule, bool leaf, void *cookie)
> {
> @@ -1427,6 +1502,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
> return false;
> }
> }
> +#endif
>
> static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
> {
> @@ -1474,6 +1550,7 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
> clear_bit(idx, map);
> }
>
> +#if 0
> static void arm_smmu_domain_free(struct iommu_domain *domain)
> {
> struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> @@ -1502,7 +1579,23 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>
> kfree(smmu_domain);
> }
> +#endif
> +
> +static void arm_smmu_domain_free(struct iommu_domain *domain)
> +{
> + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> + struct arm_smmu_device *smmu = smmu_domain->smmu;
> + struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> + /*
> + * Xen: Remove the free functions that are not used and code related
> + * to S1 translation. We just need to free the domain and vmid here.
> + */
Can you please give a reason to remove stage-1 code? This is not in the
spirit of a verbatim port, and I still can't see why you can't keep it.
> + if (cfg->vmid)
> + arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
> + kfree(smmu_domain);
> +}
>
> +#if 0 /*Xen: The finalize domain functions are not needed in current form */
> static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
> struct io_pgtable_cfg *pgtbl_cfg)
> {
> @@ -1551,16 +1644,41 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
> cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
> return 0;
> }
> +#endif
> +
> +static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain)
> +{
> + int vmid;
> + struct arm_smmu_device *smmu = smmu_domain->smmu;
> + struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> +
> + vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
> + if (vmid < 0)
> + return vmid;
> +
> + /* Xen: Get the ttbr and vtcr values
/*
* Xen: ...
But why do you need to duplicate the function when you can just replace
the two lines that need to be modified? (See the sketch after the quoted
function below.)
> + * vttbr: This is a shared value with the domain page table
> + * vtcr: The TCR settings are the same as CPU since he page
s/he/the/
> + * tables are shared
> + */
> +
> + cfg->vmid = vmid;
> + cfg->vttbr = page_to_maddr(cfg->domain->arch.p2m.root);
> + cfg->vtcr = READ_SYSREG32(VTCR_EL2) & STRTAB_STE_2_VTCR_MASK;
I still think this is really fragile. You at least need a comment on the
other side (e.g. where VTCR_EL2 is written) to explain that you are
relying on the value in other places.
> + return 0;
> +}
>
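To illustrate the point above: keep the Linux arm_smmu_domain_finalise_s2()
as is and only swap the assignments at the end (untested sketch, field
names as in this patch):

	cfg->vmid	= vmid;
	/* Xen: the p2m page tables are shared with the CPU */
	cfg->vttbr	= page_to_maddr(cfg->domain->arch.p2m.root);
	cfg->vtcr	= READ_SYSREG32(VTCR_EL2) & STRTAB_STE_2_VTCR_MASK;

	return 0;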
> static int arm_smmu_domain_finalise(struct iommu_domain *domain)
> {
> int ret;
> +#if 0 /* Xen: pgtbl_cfg not needed. So modify the function as needed */
> unsigned long ias, oas;
> enum io_pgtable_fmt fmt;
> struct io_pgtable_cfg pgtbl_cfg;
> struct io_pgtable_ops *pgtbl_ops;
> int (*finalise_stage_fn)(struct arm_smmu_domain *,
> struct io_pgtable_cfg *);
> +#endif
> struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> struct arm_smmu_device *smmu = smmu_domain->smmu;
>
> @@ -1575,6 +1693,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
> if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
> smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
>
> +#if 0
> switch (smmu_domain->stage) {
> case ARM_SMMU_DOMAIN_S1:
> ias = VA_BITS;
> @@ -1616,7 +1735,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
> ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
> if (ret < 0)
> free_io_pgtable_ops(pgtbl_ops);
> +#endif
>
> + ret = arm_smmu_domain_finalise_s2(smmu_domain);
> return ret;
> }
>
> @@ -1709,7 +1830,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
> } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> ste->s1_cfg = &smmu_domain->s1_cfg;
> ste->s2_cfg = NULL;
> +#if 0 /* Xen: S1 configuration not needed */
What would be the issue with leaving this code in?
> arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
> +#endif
> } else {
> ste->s1_cfg = NULL;
> ste->s2_cfg = &smmu_domain->s2_cfg;
> @@ -1721,6 +1844,7 @@ out_unlock:
> return ret;
> }
> > +#if 0
/* Xen: ... */
> static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
> phys_addr_t paddr, size_t size, int prot)
> {
> @@ -1772,6 +1896,7 @@ struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
> put_device(dev);
> return dev ? dev_get_drvdata(dev) : NULL;
> }
> +#endif
>
> static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
> {
> @@ -1782,8 +1907,9 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>
> return sid < limit;
> }
> -
Please don't remove newline.
> +#if 0
> static struct iommu_ops arm_smmu_ops;
> +#endif
>
> static int arm_smmu_add_device(struct device *dev)
> {
> @@ -1791,9 +1917,12 @@ static int arm_smmu_add_device(struct device *dev)
> struct arm_smmu_device *smmu;
> struct arm_smmu_master_data *master;
> struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +#if 0 /*Xen: iommu_group is not needed */
> struct iommu_group *group;
> +#endif
>
> - if (!fwspec || fwspec->ops != &arm_smmu_ops)
> + /* Xen: fwspec->ops are not needed */
> + if (!fwspec)
> return -ENODEV;
> /*
> * We _can_ actually withstand dodgy bus code re-calling add_device()
> @@ -1830,6 +1959,12 @@ static int arm_smmu_add_device(struct device *dev)
> }
> }
>
> +#if 0
> +/*
> + * Xen: Do not need an iommu group as the stream data is carried by the SMMU
> + * master device object
> + */
This would be better placed before the #if 0, so an IDE will still show
the comment even when the #if 0 block is folded.
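I.e.:

/*
 * Xen: Do not need an iommu group as the stream data is carried by the
 * SMMU master device object
 */
#if 0
	group = iommu_group_get_for_dev(dev);
	...
#endif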
> +
> group = iommu_group_get_for_dev(dev);
> if (!IS_ERR(group)) {
> iommu_group_put(group);
> @@ -1837,8 +1972,16 @@ static int arm_smmu_add_device(struct device *dev)
> }
>
> return PTR_ERR_OR_ZERO(group);
> +#endif
> + return 0;
> }
>
> +/*
> + * Xen: We can potentially support this function and destroy a device. This
> + * will be relevant for PCI hotplug. So, will be implemented as needed after
> + * passthrough support is available.
> + */
> +#if 0
> static void arm_smmu_remove_device(struct device *dev)
> {
> struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> @@ -1974,7 +2117,7 @@ static struct iommu_ops arm_smmu_ops = {
> .put_resv_regions = arm_smmu_put_resv_regions,
> .pgsize_bitmap = -1UL, /* Restricted during device attach */
> };
> -
Ditto for the newline. I know I didn't mention it in every place in the
previous series. But I would have expected you to apply my comments
everywhere.
> +#endif
> /* Probing and initialisation functions */
> static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
> struct arm_smmu_queue *q,
> @@ -1984,13 +2127,19 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
> {
> size_t qsz = ((1 << q->max_n_shift) * dwords) << 3;
>
> - q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma, GFP_KERNEL);
> + /* The SMMU cache coherency property is always set. Since we are sharing the CPU translation tables
/*
* ...
> + * just make a regular allocation.
I am not sure I understand it. AFAIU, q is for the command queue. So
how will sharing the CPU translation tables help here?
Furthermore, I don't understand how you can say the cache coherency
property is always set. When I look at the driver, it seems to be able to
handle non-coherent memory. So where do you modify that?
> + */
> + q->base = _xzalloc(qsz, sizeof(void *));
> +
> if (!q->base) {
> dev_err(smmu->dev, "failed to allocate queue (0x%zx bytes)\n",
> qsz);
> return -ENOMEM;
> }
>
> + q->base_dma = virt_to_maddr(q->base);
> +
> q->prod_reg = arm_smmu_page1_fixup(prod_off, smmu);
> q->cons_reg = arm_smmu_page1_fixup(cons_off, smmu);
> q->ent_dwords = dwords;
> @@ -2056,6 +2205,7 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
> u64 reg;
> u32 size, l1size;
> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> + u32 alignment;
>
> /* Calculate the L1 size, capped to the SIDSIZE. */
> size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
> @@ -2069,14 +2219,17 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
> size, smmu->sid_bits);
>
> l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
> - strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
> - GFP_KERNEL | __GFP_ZERO);
> + alignment = max_t(u32, cfg->num_l1_ents, 64);
Same as before. I know I didn't go through the rest of the code, but you
could have at least applied my comments on alignment here too. E.g. where
does the 64 come from?
But it looks to me like you want to create a function
dmam_alloc_coherent that will do the allocation for you. This could be
used in a few places within the driver...
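Something like this (untested sketch, assuming the usual gfp_t/dma_addr_t
compat typedefs and Xen's flsl() for the power-of-two rounding that
_xzalloc() asserts):

static void *dmam_alloc_coherent(struct device *dev, size_t size,
				 dma_addr_t *dma_handle, gfp_t gfp)
{
	void *vaddr;
	unsigned long alignment = size;

	/* _xzalloc() wants a power-of-two alignment: round up if needed */
	if ( size & (size - 1) )
		alignment = 1UL << flsl(size);

	vaddr = _xzalloc(size, alignment);
	if ( !vaddr )
		return NULL;

	*dma_handle = virt_to_maddr(vaddr);

	return vaddr;
}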
> + strtab = _xzalloc(l1size, l1size);
> +
> if (!strtab) {
> dev_err(smmu->dev,
> "failed to allocate l1 stream table (%u bytes)\n",
> size);
> return -ENOMEM;
> }
> +
> + cfg->strtab_dma = virt_to_maddr(strtab);
> cfg->strtab = strtab;
>
> /* Configure strtab_base_cfg for 2 levels */
> @@ -2098,14 +2251,16 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>
> size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
> - strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
> - GFP_KERNEL | __GFP_ZERO);
... such as here.
> + strtab = _xzalloc(size, size);
Hmmm, _xzalloc contains the following assert:
ASSERT((align & (align - 1)) == 0);
How are you sure the size will always honor this check?
> +
> if (!strtab) {
> dev_err(smmu->dev,
> "failed to allocate linear stream table (%u bytes)\n",
> size);
> return -ENOMEM;
> }
> +
> + cfg->strtab_dma = virt_to_maddr(strtab);
> cfg->strtab = strtab;
> cfg->num_l1_ents = 1 << smmu->sid_bits;
>
> @@ -2182,6 +2337,7 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
> 1, ARM_SMMU_POLL_TIMEOUT_US);
> }
>
> +#if 0 /* Xen: There is no MSI support as yet */
> static void arm_smmu_free_msis(void *data)
> {
> struct device *dev = data;
> @@ -2247,36 +2403,39 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
> /* Add callback to free MSIs on teardown */
> devm_add_action(dev, arm_smmu_free_msis, dev);
> }
> +#endif
>
> static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
> {
> int irq, ret;
>
> +#if 0 /* Xen: Cannot set up MSIs for now */
> arm_smmu_setup_msis(smmu);
> +#endif
>
> /* Request interrupt lines */
> irq = smmu->evtq.q.irq;
> if (irq) {
> - ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> - arm_smmu_evtq_thread,
> - IRQF_ONESHOT,
> - "arm-smmu-v3-evtq", smmu);
> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
Why do you need to set the IRQ type? Can't it be found from the firmware
tables?
> + ret = request_irq(irq, arm_smmu_evtq_thread,
> + 0, "arm-smmu-v3-evtq", smmu);
Please create a stub for devm_request_threaded_irq.
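Something along these lines (sketch) would be enough for the call sites
passing a NULL hard handler; the combined-irq case below, which passes a
real handler as well, would still need its wrapper:

/* Xen: no threaded IRQs, so request the thread function as the handler */
#define devm_request_threaded_irq(dev, irq, handler, thread_fn, flags,	\
				  name, priv)				\
	request_irq(irq, thread_fn, 0, name, priv)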
> if (ret < 0)
> dev_warn(smmu->dev, "failed to enable evtq irq\n");
> }
>
> irq = smmu->cmdq.q.irq;
> if (irq) {
> - ret = devm_request_irq(smmu->dev, irq,
> - arm_smmu_cmdq_sync_handler, 0,
> - "arm-smmu-v3-cmdq-sync", smmu);
> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
> + ret = request_irq(irq, arm_smmu_cmdq_sync_handler,
> + 0, "arm-smmu-v3-cmdq-sync", smmu);
Ditto.
> if (ret < 0)
> dev_warn(smmu->dev, "failed to enable cmdq-sync irq\n");
> }
>
> irq = smmu->gerr_irq;
> if (irq) {
> - ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
> + ret = request_irq(irq, arm_smmu_gerror_handler,
> 0, "arm-smmu-v3-gerror", smmu);
Ditto.
> if (ret < 0)
> dev_warn(smmu->dev, "failed to enable gerror irq\n");
> @@ -2284,12 +2443,13 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>
> if (smmu->features & ARM_SMMU_FEAT_PRI) {
> irq = smmu->priq.q.irq;
> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
> if (irq) {
> - ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> - arm_smmu_priq_thread,
> - IRQF_ONESHOT,
> - "arm-smmu-v3-priq",
> - smmu);
> + ret = request_irq(irq,
> + arm_smmu_priq_thread,
> + 0,
> + "arm-smmu-v3-priq",
> + smmu);
Ditto.
> if (ret < 0)
> dev_warn(smmu->dev,
> "failed to enable priq irq\n");
> @@ -2316,11 +2476,11 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
> * Cavium ThunderX2 implementation doesn't support unique
> * irq lines. Use a single irq line for all the SMMUv3 interrupts.
> */
> - ret = devm_request_threaded_irq(smmu->dev, irq,
> - arm_smmu_combined_irq_handler,
> - arm_smmu_combined_irq_thread,
> - IRQF_ONESHOT,
> - "arm-smmu-v3-combined-irq", smmu);
> + ret = request_irq(irq,
> + arm_smmu_combined_irq_handler,
> + 0,
> + "arm-smmu-v3-combined-irq",
> + smmu);
Ditto. And here is a good example of where a stub helps: you set the IRQ
type everywhere but not for this one.
> if (ret < 0)
> dev_warn(smmu->dev, "failed to enable combined irq\n");
> } else
> @@ -2542,8 +2702,11 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
> smmu->features |= ARM_SMMU_FEAT_STALLS;
> }
>
> +#if 0 /* Xen: Do not enable Stage 1 translations */
This is just saying stage-1 is available. So why do you care so much
about disabling it? This is just adding more #if 0; we managed to get
away in SMMUv1 by leaving the code as is.
> +
> if (reg & IDR0_S1P)
> smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
> +#endif
>
> if (reg & IDR0_S2P)
> smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
> @@ -2616,10 +2779,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
> if (reg & IDR5_GRAN4K)
> smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
>
> +#if 0 /* Xen: SMMU ops do not have a pgsize_bitmap member for Xen */
> if (arm_smmu_ops.pgsize_bitmap == -1UL)
> arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
> else
> arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
> +#endif
>
> /* Output address size */
> switch (reg & IDR5_OAS_MASK << IDR5_OAS_SHIFT) {
> @@ -2646,10 +2811,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
> smmu->oas = 48;
> }
>
> +#if 0 /* Xen: There is no support for DMA mask */
Stub it?
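E.g. a no-op reporting success, matching the other compat defines
(sketch):

/* Xen: the table walker operates on host physical addresses directly */
#define dma_set_mask_and_coherent(dev, mask) 0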
> /* Set the DMA mask for our table walker */
> if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
> dev_warn(smmu->dev,
> "failed to set DMA mask for table walker\n");
> +#endif
>
> smmu->ias = max(smmu->ias, smmu->oas);
>
> @@ -2680,7 +2847,8 @@ static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
> struct device *dev = smmu->dev;
> struct acpi_iort_node *node;
>
> - node = *(struct acpi_iort_node **)dev_get_platdata(dev);
> + /* Xen: Modification to get iort_node */
> + node = (struct acpi_iort_node *)dev->acpi_node;
>
> /* Retrieve SMMUv3 specific data */
> iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
> @@ -2703,7 +2871,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
> static int arm_smmu_device_dt_probe(struct platform_device *pdev,
> struct arm_smmu_device *smmu)
> {
> - struct device *dev = &pdev->dev;
> + struct device *dev = pdev;
> u32 cells;
> int ret = -EINVAL;
>
> @@ -2716,8 +2884,8 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>
> parse_driver_options(smmu);
>
> - if (of_dma_is_coherent(dev->of_node))
> - smmu->features |= ARM_SMMU_FEAT_COHERENCY;
> + /* Xen: Set the COHERENCY feature */
> + smmu->features |= ARM_SMMU_FEAT_COHERENCY;
This looks completely wrong. You should only do it when the
firmware tables say it is fine.
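Something like this (sketch, using the dt helpers already available in
Xen) would keep it firmware-driven for the DT case:

	/* Xen: derive the coherency feature from the firmware tables */
	if ( dt_property_read_bool(dev_to_dt(dev), "dma-coherent") )
		smmu->features |= ARM_SMMU_FEAT_COHERENCY;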
>
> return ret;
> }
> @@ -2734,9 +2902,11 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
> {
> int irq, ret;
> struct resource *res;
> +#if 0 /*Xen: Do not need to setup sysfs */
> resource_size_t ioaddr;
> +#endif
> struct arm_smmu_device *smmu;
> - struct device *dev = &pdev->dev;
> + struct device *dev = pdev;/* Xen: dev is ignored */
> bool bypass;
>
> smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
> @@ -2763,8 +2933,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
> dev_err(dev, "MMIO region too small (%pr)\n", res);
> return -EINVAL;
> }
> +#if 0 /*Xen: Do not need to setup sysfs */
> ioaddr = res->start;
> -
Again the newline.
> +#endif
> smmu->base = devm_ioremap_resource(dev, res);
> if (IS_ERR(smmu->base))
> return PTR_ERR(smmu->base);
> @@ -2802,13 +2973,16 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
> return ret;
>
> /* Record our private device structure */
> +#if 0 /* Xen: SMMU is not treated as a platform device */
> platform_set_drvdata(pdev, smmu);
> -
Again the newline.
> +#endif
> /* Reset the device */
> ret = arm_smmu_device_reset(smmu, bypass);
> if (ret)
> return ret;
>
> +/* Xen: Not creating an IOMMU device list for Xen */
> +#if 0
> /* And we're up. Go go go! */
> ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
> "smmu3.%pa", &ioaddr);
> @@ -2844,9 +3018,18 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
> if (ret)
> return ret;
> }
> +#endif
> + /*
> + * Xen: Keep a list of all probed devices. This will be used to query
> + * the smmu devices based on the fwnode.
> + */
> + INIT_LIST_HEAD(&smmu->devices);
> + spin_lock(&arm_smmu_devices_lock);
> + list_add(&smmu->devices, &arm_smmu_devices);
> + spin_unlock(&arm_smmu_devices_lock);
> return 0;
> }
> -
Again the newline removed and /* Xen ... */
> +#if 0
> static int arm_smmu_device_remove(struct platform_device *pdev)
> {
> struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
> @@ -2860,6 +3043,10 @@ static void arm_smmu_device_shutdown(struct platform_device *pdev)
> {
> arm_smmu_device_remove(pdev);
> }
> +#endif
> +
> +#define MODULE_DEVICE_TABLE(type, name)
> +#define of_device_id dt_device_match
That should be defined at the top.
>
> static const struct of_device_id arm_smmu_of_match[] = {
> { .compatible = "arm,smmu-v3", },
> @@ -2867,6 +3054,7 @@ static const struct of_device_id arm_smmu_of_match[] = {
> };
> MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
>
> +#if 0
> static struct platform_driver arm_smmu_driver = {
> .driver = {
> .name = "arm-smmu-v3",
> @@ -2883,3 +3071,318 @@ IOMMU_OF_DECLARE(arm_smmuv3, "arm,smmu-v3", NULL);
> MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
> MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
> MODULE_LICENSE("GPL v2");
> +#endif
> +
> +/***** Start of Xen specific code *****/
> +
> +static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
> +{
> + struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
> + struct iommu_domain *cfg;
> +
> + spin_lock(&smmu_domain->lock);
> + list_for_each_entry(cfg, &smmu_domain->iommu_domains, list) {
> + /*
> + * Only invalidate the context when SMMU is present.
> + * This is because the context initialization is delayed
> + * until a master has been added.
> + */
> + if (unlikely(!ACCESS_ONCE(cfg->priv->smmu)))
> + continue;
> + arm_smmu_tlb_inv_context(cfg->priv);
> + }
> + spin_unlock(&smmu_domain->lock);
> + return 0;
> +}
> +
> +static int __must_check arm_smmu_iotlb_flush(struct domain *d,
> + unsigned long gfn,
> + unsigned int page_count)
> +{
> + return arm_smmu_iotlb_flush_all(d);
> +}
> +
> +static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
> + struct device *dev)
> +{
> + struct iommu_domain *domain;
> + struct arm_smmu_xen_domain *xen_domain;
> + struct arm_smmu_device *smmu;
> + struct arm_smmu_domain *smmu_domain;
> +
> + xen_domain = dom_iommu(d)->arch.priv;
> +
> + smmu = arm_smmu_get_by_fwnode(dev->iommu_fwspec->iommu_fwnode);
> + if (!smmu)
> + return NULL;
> +
> + /*
> + * Loop through the &xen_domain->iommu_domains to locate a context
> + * assigned to this SMMU
> + */
> + list_for_each_entry(domain, &xen_domain->iommu_domains, list) {
> + smmu_domain = to_smmu_domain(domain);
> + if (smmu_domain->smmu == smmu)
> + return domain;
> + }
> +
> + return NULL;
> +}
> +
> +static void arm_smmu_destroy_iommu_domain(struct iommu_domain *domain)
> +{
> + list_del(&domain->list);
> + arm_smmu_domain_free(domain);
> +}
> +
> +static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
> + struct device *dev, u32 flag)
> +{
> + int ret = 0;
> + struct iommu_domain *domain;
> + struct arm_smmu_xen_domain *xen_domain;
> + struct arm_smmu_domain *arm_smmu;
> +
> + xen_domain = dom_iommu(d)->arch.priv;
> +
> + if (!dev->archdata.iommu) {
> + dev->archdata.iommu = xzalloc(struct arm_smmu_xen_device);
> + if (!dev->archdata.iommu)
> + return -ENOMEM;
> + }
> +
> + ret = arm_smmu_add_device(dev);
> + if (ret)
> + return ret;
> +
> + spin_lock(&xen_domain->lock);
> +
> + /*
> + * Check to see if an iommu_domain already exists for this xen domain
> + * under the same SMMU
> + */
> + domain = arm_smmu_get_domain(d, dev);
> + if (!domain) {
> +
> + domain = arm_smmu_domain_alloc(IOMMU_DOMAIN_DMA);
> + if (!domain) {
> + ret = -ENOMEM;
> + goto out;
> + }
> +
> + arm_smmu = to_smmu_domain(domain);
> + arm_smmu->s2_cfg.domain = d;
> +
> + /* Chain the new context to the domain */
> + list_add(&domain->list, &xen_domain->iommu_domains);
> +
> + }
> +
> + ret = arm_smmu_attach_dev(domain, dev);
> + if (ret) {
> + if (domain->ref.counter == 0)
> + arm_smmu_destroy_iommu_domain(domain);
> + } else {
> + atomic_inc(&domain->ref);
> + }
> +
> +out:
> + spin_unlock(&xen_domain->lock);
> + return ret;
> +}
> +
> +static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
> +{
> + struct iommu_domain *domain = arm_smmu_get_domain(d, dev);
> + struct arm_smmu_xen_domain *xen_domain;
> + struct arm_smmu_domain *arm_smmu = to_smmu_domain(domain);
> +
> + xen_domain = dom_iommu(d)->arch.priv;
> +
> + if (!arm_smmu || arm_smmu->s2_cfg.domain != d) {
> + dev_err(dev, " not attached to domain %d\n", d->domain_id);
> + return -ESRCH;
> + }
> +
> + spin_lock(&xen_domain->lock);
> +
> + arm_smmu_detach_dev(dev);
> + atomic_dec(&domain->ref);
> +
> + if (domain->ref.counter == 0)
> + arm_smmu_destroy_iommu_domain(domain);
> +
> + spin_unlock(&xen_domain->lock);
> +
> + return 0;
> +}
> +
> +static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
> + u8 devfn, struct device *dev)
> +{
> + int ret = 0;
> +
> + /* Don't allow remapping on other domain than hwdom */
> + if (t && t != hardware_domain)
> + return -EPERM;
> +
> + if (t == s)
> + return 0;
> +
> + ret = arm_smmu_deassign_dev(s, dev);
> + if (ret)
> + return ret;
> +
> + if (t) {
> + /* No flags are defined for ARM. */
> + ret = arm_smmu_assign_dev(t, devfn, dev, 0);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int arm_smmu_iommu_domain_init(struct domain *d)
> +{
> + struct arm_smmu_xen_domain *xen_domain;
> +
> + xen_domain = xzalloc(struct arm_smmu_xen_domain);
> + if (!xen_domain)
> + return -ENOMEM;
> +
> + spin_lock_init(&xen_domain->lock);
> + INIT_LIST_HEAD(&xen_domain->iommu_domains);
> +
> + dom_iommu(d)->arch.priv = xen_domain;
> +
> + return 0;
> +}
> +
> +static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
> +{
> +}
> +
> +static void arm_smmu_iommu_domain_teardown(struct domain *d)
> +{
> + struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +
> + ASSERT(list_empty(&xen_domain->iommu_domains));
> + xfree(xen_domain);
> +}
> +
> +static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
> + unsigned long mfn, unsigned int flags)
> +{
> + p2m_type_t t;
> +
> + /*
> + * Grant mappings can be used for DMA requests. The dev_bus_addr
> + * returned by the hypercall is the MFN (not the IPA). For devices
> + * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain
> + * p2m to allow DMA requests to work.
> + * This is only valid when the domain is direct mapped. Hence this
> + * function should only be used by gnttab code with gfn == mfn.
> + */
> + BUG_ON(!is_domain_direct_mapped(d));
> + BUG_ON(mfn != gfn);
> +
> + /* We only support readable and writable flags */
> + if (!(flags & (IOMMUF_readable | IOMMUF_writable)))
> + return -EINVAL;
> +
> + t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro;
> +
> + /*
> + * The function guest_physmap_add_entry replaces the current mapping
> + * if there is already one...
> + */
> + return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
> +}
> +
> +static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
> +{
> + /*
> + * This function should only be used by gnttab code when the domain
> + * is direct mapped
> + */
> + if (!is_domain_direct_mapped(d))
> + return -EINVAL;
> +
> + return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
> +}
> +
> +static const struct iommu_ops arm_smmu_iommu_ops = {
> + .init = arm_smmu_iommu_domain_init,
> + .hwdom_init = arm_smmu_iommu_hwdom_init,
> + .teardown = arm_smmu_iommu_domain_teardown,
> + .iotlb_flush = arm_smmu_iotlb_flush,
> + .iotlb_flush_all = arm_smmu_iotlb_flush_all,
> + .assign_device = arm_smmu_assign_dev,
> + .reassign_device = arm_smmu_reassign_dev,
> + .map_page = arm_smmu_map_page,
> + .unmap_page = arm_smmu_unmap_page,
> +};
> +
> +static
> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> + struct arm_smmu_device *smmu, *found = NULL;
> +
> + spin_lock(&arm_smmu_devices_lock);
> + list_for_each_entry(smmu, &arm_smmu_devices, devices) {
> + if (smmu->dev->fwnode == fwnode) {
> + found = smmu;
> + break;
> + }
> + }
> + spin_unlock(&arm_smmu_devices_lock);
> +
> + return found;
> +}
> +
> +static __init int arm_smmu_dt_init(struct dt_device_node *dev,
> + const void *data)
> +{
> + int rc;
> +
> + /*
> + * Even if the device can't be initialized, we don't want to
> + * give the SMMU device to dom0.
> + */
> + dt_device_set_used_by(dev, DOMID_XEN);
> +
> + rc = arm_smmu_device_probe(dt_to_dev(dev));
> + if (rc)
> + return rc;
> +
> + iommu_set_ops(&arm_smmu_iommu_ops);
> +
> + return 0;
> +}
> +
> +DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
> + .dt_match = arm_smmu_of_match,
> + .init = arm_smmu_dt_init,
> +DT_DEVICE_END
> +
> +#ifdef CONFIG_ACPI
> +/* Set up the IOMMU */
> +static int __init arm_smmu_acpi_init(const void *data)
> +{
> + int rc;
> + rc = arm_smmu_device_probe((struct device *)data);
> +
> + if (rc)
> + return rc;
> +
> + iommu_set_ops(&arm_smmu_iommu_ops);
> + return 0;
> +}
> +
> +ACPI_DEVICE_START(asmmuv3, "ARM SMMU V3", DEVICE_IOMMU)
> + .class_type = ACPI_IORT_NODE_SMMU_V3,
> + .init = arm_smmu_acpi_init,
> +ACPI_DEVICE_END
> +
> +#endif
>
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
2017-12-05 14:17 ` Julien Grall
@ 2017-12-05 23:26 ` Goel, Sameer
2017-12-06 9:55 ` Julien Grall
2017-12-06 10:01 ` Julien Grall
2017-12-15 22:45 ` Goel, Sameer
2017-12-16 6:05 ` Goel, Sameer
2 siblings, 2 replies; 19+ messages in thread
From: Goel, Sameer @ 2017-12-05 23:26 UTC (permalink / raw)
To: Julien Grall, xen-devel, julien.grall, mjaggi; +Cc: sstabellini, shankerd
On 12/5/2017 7:17 AM, Julien Grall wrote:
> Hello,
>
> On 05/12/17 03:59, Sameer Goel wrote:
>> This driver follows an approach similar to the smmu driver. The intent here
>> is to reuse as much Linux code as possible.
>> - Glue code has been introduced in headers to bridge the API calls.
>> - Called Linux functions from the Xen IOMMU function calls.
>> - Xen modifications are preceded by /*Xen: comment */
>> - New config items for SMMUv3 and legacy SMMU have been defined.
>
> There are no reason to touch legacy SMMU in this patch. Please move that outside of it.
Ok.
>
>>
>> Signed-off-by: Sameer Goel <sgoel@codeaurora.org>
>> ---
>> xen/drivers/Kconfig | 2 +
>> xen/drivers/passthrough/arm/Kconfig | 14 +
>> xen/drivers/passthrough/arm/Makefile | 3 +-
>> xen/drivers/passthrough/arm/arm_smmu.h | 189 ++++++++++
>> xen/drivers/passthrough/arm/smmu-v3.c | 619 ++++++++++++++++++++++++++++++---
>> 5 files changed, 768 insertions(+), 59 deletions(-)
>> create mode 100644 xen/drivers/passthrough/arm/Kconfig
>> create mode 100644 xen/drivers/passthrough/arm/arm_smmu.h
>>
>> diff --git a/xen/drivers/Kconfig b/xen/drivers/Kconfig
>> index bc3a54f..6126553 100644
>> --- a/xen/drivers/Kconfig
>> +++ b/xen/drivers/Kconfig
>> @@ -12,4 +12,6 @@ source "drivers/pci/Kconfig"
>> source "drivers/video/Kconfig"
>> +source "drivers/passthrough/arm/Kconfig"
>> +
>> endmenu
>> diff --git a/xen/drivers/passthrough/arm/Kconfig b/xen/drivers/passthrough/arm/Kconfig
>> new file mode 100644
>> index 0000000..9ac4cea
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/arm/Kconfig
>> @@ -0,0 +1,14 @@
>> +
>> +config ARM_SMMU
>> + bool "ARM SMMU v1/2 support"
>> + depends on ARM_64
>
> Why? SMMUv1 and SMMUv2 support Arm 32-bit.
>
>> + help
>> + Support for implementations of the ARM System MMU architecture. (1/2)
>
> I am not sure I understand the "(1/2)" after the full stop.
>
>> +
>> +config ARM_SMMU_v3
>> + bool "ARM SMMUv3 Support"
>> + depends on ARM_64
>> + help
>> + Support for implementations of the ARM System MMU architecture
>> + version 3.
>> +
>> diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
>> index f4cd26e..5b3eb15 100644
>> --- a/xen/drivers/passthrough/arm/Makefile
>> +++ b/xen/drivers/passthrough/arm/Makefile
>> @@ -1,2 +1,3 @@
>> obj-y += iommu.o
>> -obj-y += smmu.o
>> +obj-$(CONFIG_ARM_SMMU) += smmu.o
>> +obj-$(CONFIG_ARM_SMMU_v3) += smmu-v3.o
>> diff --git a/xen/drivers/passthrough/arm/arm_smmu.h b/xen/drivers/passthrough/arm/arm_smmu.h
>> new file mode 100644
>> index 0000000..b5e161f
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/arm/arm_smmu.h
>
> I don't think there is any value in using Linux coding style in this header. It contains Xen stubs.
>
> I would also have expected this new file to come in a separate patch with the associated modifications to SMMUv2. This would make it easier to see what could be common.
That makes sense. I was holding it back till I post the first actual patch and just wanted to put out the SMMUv3 patches.
>
>> @@ -0,0 +1,189 @@
>> +/******************************************************************************
>> + * ./arm_smmu.h
>> + *
>> + * Common compatibility defines and data_structures for porting arm smmu
>> + * drivers from Linux.
>
> [...]
>
>> +static struct resource *platform_get_resource(struct platform_device *pdev,
>> + unsigned int type,
>> + unsigned int num)
>> +{
>> + /*
>> + * The resource is only used between 2 calls of platform_get_resource.
>> + * It's quite ugly but it avoids adding too much code in the part
>> + * imported from Linux
>> + */
>> + static struct resource res;
>> + struct acpi_iort_node *iort_node;
>> + struct acpi_iort_smmu_v3 *node_smmu_data;
>> + int ret = 0;
>> +
>> + res.type = type;
>> +
>> + switch (type) {
>> + case IORESOURCE_MEM:
>> + if (pdev->type == DEV_ACPI) {
>> + ret = 1;
>> + iort_node = pdev->acpi_node;
>> + node_smmu_data =
>> + (struct acpi_iort_smmu_v3 *)iort_node->node_data;
>
> Above you say: "Common compatibility defines and data_structures for porting arm smmu driver from Linux". But this code is clearly SMMUv3.
>
It is. I will pull this into the SMMUv3 driver.
>> +
>> + if (node_smmu_data != NULL) {
>> + res.addr = node_smmu_data->base_address;
>> + res.size = SZ_128K;
>> + ret = 0;
>> + }
>> + } else {
>> + ret = dt_device_get_address(dev_to_dt(pdev), num,
>> + &res.addr, &res.size);
>> + }
>> +
>> + return ((ret) ? NULL : &res);
>> +
>> + case IORESOURCE_IRQ:
>> + /* ACPI case not implemented as there is no use case for it */
>> + ret = platform_get_irq(dev_to_dt(pdev), num);
>> +
>> + if (ret < 0)
>> + return NULL;
>> +
>> + res.addr = ret;
>> + res.size = 1;
>> +
>> + return &res;
>> +
>> + default:
>> + return NULL;
>> + }
>> +}
>> +
>> +static int platform_get_irq_byname(struct platform_device *pdev, const char *name)
>> +{
>> + const struct dt_property *dtprop;
>> + struct acpi_iort_node *iort_node;
>> + struct acpi_iort_smmu_v3 *node_smmu_data;
>> + int ret = 0;
>> +
>> + if (pdev->type == DEV_ACPI) {
>> + iort_node = pdev->acpi_node;
>> + node_smmu_data = (struct acpi_iort_smmu_v3 *)iort_node->node_data;
>
> Ditto.
>
>> +
>> + if (node_smmu_data != NULL) {
>> + if (!strcmp(name, "eventq"))
>> + ret = node_smmu_data->event_gsiv;
>> + else if (!strcmp(name, "priq"))
>> + ret = node_smmu_data->pri_gsiv;
>> + else if (!strcmp(name, "cmdq-sync"))
>> + ret = node_smmu_data->sync_gsiv;
>> + else if (!strcmp(name, "gerror"))
>> + ret = node_smmu_data->gerr_gsiv;
>> + else
>> + ret = -EINVAL;
>> + }
>> + } else {
>> + dtprop = dt_find_property(dev_to_dt(pdev), "interrupt-names", NULL);
>> + if (!dtprop)
>> + return -EINVAL;
>> +
>> + if (!dtprop->value)
>> + return -ENODATA;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +/* Xen: Stub out DMA domain related functions */
>
> I don't think 'Xen:' is necessary as this file contains Xen stubs.
Ok.
>
>> +#define iommu_get_dma_cookie(dom) 0
>> +#define iommu_put_dma_cookie(dom) 0
>> +
>> +static void __iomem *devm_ioremap_resource(struct device *dev,
>> + struct resource *res)
>> +{
>> + void __iomem *ptr;
>> +
>> + if (!res || res->type != IORESOURCE_MEM) {
>> + dev_err(dev, "Invalid resource\n");
>> + return ERR_PTR(-EINVAL);
>> + }
>> +
>> + ptr = ioremap_nocache(res->addr, res->size);
>> + if (!ptr) {
>> + dev_err(dev,
>> + "ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
>> + res->addr, res->size);
>> + return ERR_PTR(-ENOMEM);
>> + }
>> +
>> + return ptr;
>> +}
>> +
>> +/* Xen: Dummy iommu_domain */
>> +struct iommu_domain {
>> + /* Runtime SMMU configuration for this iommu_domain */
>> + struct arm_smmu_domain *priv;
>> + unsigned int type;
>
> What are the values for type?
>
>> +
>> + atomic_t ref;
>> + /* Used to link iommu_domain contexts for the same domain.
>
> /*
> * Used ...
> */
>
>> + * There is at least one per SMMU used by the domain.
>> + */
>> + struct list_head list;
>> +};
>> +/* Xen: Domain type definitions. Not really needed for Xen, defining to port
>
> /*
> * Xen: ...
>
>> + * Linux code as-is
>> + */
>> +#define IOMMU_DOMAIN_UNMANAGED 0
>> +#define IOMMU_DOMAIN_DMA 1
>> +#define IOMMU_DOMAIN_IDENTITY 2
>> +
>> +/* Xen: Describes information required for a Xen domain */
>> +struct arm_smmu_xen_domain {
>> + spinlock_t lock;
>> + /* List of iommu domains associated to this domain */
>> + struct list_head iommu_domains;
>> +};
>> +
>> +/*
>> + * Xen: Information about each device stored in dev->archdata.iommu
>> + *
>> + * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
>> + * the SMMU).
>> + */
>> +struct arm_smmu_xen_device {
>> + struct iommu_domain *domain;
>> +};
>> +
>> +#endif /* __ARM_SMMU_H__ */
>
> Missing emacs magic.
>
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index e67ba6c..c6c1b99 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -18,28 +18,38 @@
>> * Author: Will Deacon <will.deacon@arm.com>
>> *
>> * This driver is powered by bad coffee and bombay mix.
>> + *
>> + *
>> + * Based on Linux drivers/iommu/arm-smmu-v3.c
>> + * => commit 7aa8619a66aea52b145e04cbab4f8d6a4e5f3f3b
>> + *
>> + * Xen modifications:
>> + * Sameer Goel <sameer.goel@linaro.org>
>> + * Copyright (C) 2017, The Linux Foundation, All rights reserved.
>> + *
>> */
>> -#include <linux/acpi.h>
>> -#include <linux/acpi_iort.h>
>> -#include <linux/delay.h>
>> -#include <linux/dma-iommu.h>
>> -#include <linux/err.h>
>> -#include <linux/interrupt.h>
>> -#include <linux/iommu.h>
>> -#include <linux/iopoll.h>
>> -#include <linux/module.h>
>> -#include <linux/msi.h>
>> -#include <linux/of.h>
>> -#include <linux/of_address.h>
>> -#include <linux/of_iommu.h>
>> -#include <linux/of_platform.h>
>> -#include <linux/pci.h>
>> -#include <linux/platform_device.h>
>> -
>> -#include <linux/amba/bus.h>
>> -
>> -#include "io-pgtable.h"
>> +#include <xen/acpi.h>
>> +#include <xen/config.h>
>> +#include <xen/delay.h>
>> +#include <xen/errno.h>
>> +#include <xen/err.h>
>> +#include <xen/irq.h>
>> +#include <xen/lib.h>
>> +#include <xen/linux_compat.h>
>> +#include <xen/list.h>
>> +#include <xen/mm.h>
>> +#include <xen/rbtree.h>
>> +#include <xen/sched.h>
>> +#include <xen/sizes.h>
>> +#include <xen/vmap.h>
>> +#include <acpi/acpi_iort.h>
>> +#include <asm/atomic.h>
>> +#include <asm/device.h>
>> +#include <asm/io.h>
>> +#include <asm/platform.h>
>> +
>> +#include "arm_smmu.h" /* Not a self contained header. So last in the list */
>> /* MMIO registers */
>> #define ARM_SMMU_IDR0 0x0
>> @@ -423,9 +433,12 @@
>> #endif
>> static bool disable_bypass;
>> +
>> +#if 0 /* Xen: Not applicable for Xen */
>> module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
>> MODULE_PARM_DESC(disable_bypass,
>> "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
>> +#endif
>
> Can't you stub module_param_named and MODULE_PARM_DESC to avoid #if 0?
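> Something like the below in the compat header would do (untested sketch;
> the empty expansions simply swallow the arguments):
>
>     #define module_param_named(name, value, type, perm)
>     #define MODULE_PARM_DESC(_parm, desc)
>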
>
>> enum pri_resp {
>> PRI_RESP_DENY,
>> @@ -433,6 +446,7 @@ enum pri_resp {
>> PRI_RESP_SUCC,
>> };
>> +#if 0 /* Xen: No MSI support in this iteration */
>> enum arm_smmu_msi_index {
>> EVTQ_MSI_INDEX,
>> GERROR_MSI_INDEX,
>> @@ -457,6 +471,7 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
>> ARM_SMMU_PRIQ_IRQ_CFG2,
>> },
>> };
>> +#endif
>> struct arm_smmu_cmdq_ent {
>> /* Common fields */
>> @@ -561,6 +576,8 @@ struct arm_smmu_s2_cfg {
>> u16 vmid;
>> u64 vttbr;
>> u64 vtcr;
>> + /* Xen: Domain associated to this configuration */
>> + struct domain *domain;
>> };
>> struct arm_smmu_strtab_ent {
>> @@ -635,9 +652,21 @@ struct arm_smmu_device {
>> struct arm_smmu_strtab_cfg strtab_cfg;
>> /* IOMMU core code handle */
>> +#if 0 /*Xen: Generic iommu_device ref not needed here */
>> struct iommu_device iommu;
>> +#endif
>> + /* Xen: Need to keep a list of SMMU devices */
>> + struct list_head devices;
>> };
>> +/* Xen: Keep a list of devices associated with this driver */
>> +static DEFINE_SPINLOCK(arm_smmu_devices_lock);
>> +static LIST_HEAD(arm_smmu_devices);
>> +/* Xen: Helper for finding a device using fwnode */
>> +static
>> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode);
>> +
>> /* SMMU private data for each master */
>> struct arm_smmu_master_data {
>> struct arm_smmu_device *smmu;
>> @@ -654,7 +683,7 @@ enum arm_smmu_domain_stage {
>> struct arm_smmu_domain {
>> struct arm_smmu_device *smmu;
>> - struct mutex init_mutex; /* Protects smmu pointer */
>> + mutex init_mutex; /* Protects smmu pointer */
>> struct io_pgtable_ops *pgtbl_ops;
>> @@ -961,6 +990,7 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>> spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>> }
>> +#if 0 /*Xen: Comment out functions that set up S1 translations */
>
> Why? I do agree that the code will not be used by Xen, but I would prefer if you minimize the number of #ifdef.
>
>> /* Context descriptor manipulation functions */
>> static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
>> {
>> @@ -1003,6 +1033,7 @@ static void arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
>> cfg->cdptr[3] = cpu_to_le64(cfg->cd.mair << CTXDESC_CD_3_MAIR_SHIFT);
>> }
>> +#endif
>> /* Stream table manipulation functions */
>> static void
>> @@ -1164,6 +1195,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>> void *strtab;
>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>> struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
>> + u32 alignment = 0;
>
> It is not necessary to initialize alignment. Also we are trying to limit the use of u* in favor of uint32_t.
>
>> if (desc->l2ptr)
>> return 0;
>> @@ -1172,14 +1204,16 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>> strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
>> desc->span = STRTAB_SPLIT + 1;
>> - desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
>> - GFP_KERNEL | __GFP_ZERO);
>> + /* Alignment picked from ARM SMMU arch version 3.x. L1ST.L2Ptr */
>> + alignment = 1 << ((5 + (desc->span - 1)));
>> + desc->l2ptr = _xzalloc(size, alignment);
>> if (!desc->l2ptr) {
>> dev_err(smmu->dev,
>> "failed to allocate l2 stream table for SID %u\n",
>> sid);
>> return -ENOMEM;
>> }
>> + desc->l2ptr_dma = virt_to_maddr(desc->l2ptr);
>> arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
>> arm_smmu_write_strtab_l1_desc(strtab, desc);
>> @@ -1232,7 +1266,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>> dev_info(smmu->dev, "unexpected PRI request received:\n");
>> dev_info(smmu->dev,
>> - "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
>> + "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova %#" PRIx64 "\n",
>> sid, ssid, grpid, last ? "L" : "",
>> evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
>> evt[0] & PRIQ_0_PERM_READ ? "R" : "",
>> @@ -1346,6 +1380,8 @@ static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
>> {
>> arm_smmu_gerror_handler(irq, dev);
>> arm_smmu_cmdq_sync_handler(irq, dev);
>> + /*Xen: No threaded irq. So call the required function from here */
>> + arm_smmu_combined_irq_thread(irq, dev);
>> return IRQ_WAKE_THREAD;
>> }
>> @@ -1358,11 +1394,49 @@ static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
>> arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>> }
>> +static void arm_smmu_evtq_thread_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_evtq_thread(irq, dev);
>> +}
>> +
>> +static void arm_smmu_priq_thread_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_priq_thread(irq, dev);
>> +}
>> +
>> +static void arm_smmu_cmdq_sync_handler_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_cmdq_sync_handler(irq, dev);
>> +}
>> +
>> +static void arm_smmu_gerror_handler_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_gerror_handler(irq, dev);
>> +}
>> +
>> +static void arm_smmu_combined_irq_handler_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_combined_irq_handler(irq, dev);
>> +}
>> +
>
> Missing:
> /* Xen: .... */
>
>> +#define arm_smmu_evtq_thread arm_smmu_evtq_thread_xen
>> +#define arm_smmu_priq_thread arm_smmu_priq_thread_xen
>> +#define arm_smmu_cmdq_sync_handler arm_smmu_cmdq_sync_handler_xen
>> +#define arm_smmu_gerror_handler arm_smmu_gerror_handler_xen
>> +#define arm_smmu_combined_irq_handler arm_smmu_combined_irq_handler_xen
>> +
>> +#if 0 /*Xen: Unused function */
>> static void arm_smmu_tlb_sync(void *cookie)
>> {
>> struct arm_smmu_domain *smmu_domain = cookie;
>> __arm_smmu_tlb_sync(smmu_domain->smmu);
>> }
>> +#endif
>> static void arm_smmu_tlb_inv_context(void *cookie)
>> {
>> @@ -1383,6 +1457,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>> __arm_smmu_tlb_sync(smmu);
>> }
>> +#if 0 /*Xen: Unused functionality */
>> static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>> size_t granule, bool leaf, void *cookie)
>> {
>> @@ -1427,6 +1502,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
>> return false;
>> }
>> }
>> +#endif
>> static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>> {
>> @@ -1474,6 +1550,7 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
>> clear_bit(idx, map);
>> }
>> +#if 0
>> static void arm_smmu_domain_free(struct iommu_domain *domain)
>> {
>> struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> @@ -1502,7 +1579,23 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>> kfree(smmu_domain);
>> }
>> +#endif
>> +
>> +static void arm_smmu_domain_free(struct iommu_domain *domain)
>> +{
>> + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> + struct arm_smmu_device *smmu = smmu_domain->smmu;
>> + struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>> + /*
>> + * Xen: Remove the free functions that are not used and code related
>> + * to S1 translation. We just need to free the domain and vmid here.
>> + */
>
> Can you please give a reason to remove stage-1 code? This is not in the spirit of a verbatim port and I still can't see why you can't keep it.
I was just clearing it out as it was not used. I will put it back in.
>
>> + if (cfg->vmid)
>> + arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
>> + kfree(smmu_domain);
>> +}
>> +#if 0 /*Xen: The finalize domain functions are not needed in current form */
>> static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
>> struct io_pgtable_cfg *pgtbl_cfg)
>> {
>> @@ -1551,16 +1644,41 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>> cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
>> return 0;
>> }
>> +#endif
>> +
>> +static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain)
>> +{
>> + int vmid;
>> + struct arm_smmu_device *smmu = smmu_domain->smmu;
>> + struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>> +
>> + vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>> + if (vmid < 0)
>> + return vmid;
>> +
>> + /* Xen: Get the ttbr and vtcr values
>
> /*
> * Xen: ...
>
> But why do you need to duplicate the function when you can just replace the 2 lines that need to be modified?
>
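> I.e. keep arm_smmu_domain_finalise_s2() intact and only swap the two
> assignments, something like (sketch):
>
>     -	cfg->vttbr = pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
>     -	cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
>     +	cfg->vttbr = page_to_maddr(cfg->domain->arch.p2m.root);
>     +	cfg->vtcr = READ_SYSREG32(VTCR_EL2) & STRTAB_STE_2_VTCR_MASK;
>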
>> + * vttbr: This is a shared value with the domain page table
>> + * vtcr: The TCR settings are the same as CPU since he page
> s/he/the/
>
>> + * tables are shared
>> + */
>> +
>> + cfg->vmid = vmid;
>> + cfg->vttbr = page_to_maddr(cfg->domain->arch.p2m.root);
>> + cfg->vtcr = READ_SYSREG32(VTCR_EL2) & STRTAB_STE_2_VTCR_MASK;
>
> I still think this is really fragile. You at least need a comment on the other side (e.g. where VTCR_EL2 is written) to explain that the value is relied upon elsewhere.
I can add the comment.
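Something along these lines next to where VTCR_EL2 is written, so the
dependency is visible from both sides (rough sketch):

    /*
     * The SMMUv3 driver (arm_smmu_domain_finalise_s2) reads this
     * value back and copies it into the STE for stage-2 translation.
     * Keep the two sides in sync when changing it.
     */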
>
>> + return 0;
>> +}
>> static int arm_smmu_domain_finalise(struct iommu_domain *domain)
>> {
>> int ret;
>> +#if 0 /* Xen: pgtbl_cfg not needed. So modify the function as needed */
>> unsigned long ias, oas;
>> enum io_pgtable_fmt fmt;
>> struct io_pgtable_cfg pgtbl_cfg;
>> struct io_pgtable_ops *pgtbl_ops;
>> int (*finalise_stage_fn)(struct arm_smmu_domain *,
>> struct io_pgtable_cfg *);
>> +#endif
>> struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> struct arm_smmu_device *smmu = smmu_domain->smmu;
>> @@ -1575,6 +1693,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
>> if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
>> smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
>> +#if 0
>> switch (smmu_domain->stage) {
>> case ARM_SMMU_DOMAIN_S1:
>> ias = VA_BITS;
>> @@ -1616,7 +1735,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
>> ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
>> if (ret < 0)
>> free_io_pgtable_ops(pgtbl_ops);
>> +#endif
>> + ret = arm_smmu_domain_finalise_s2(smmu_domain);
>> return ret;
>> }
>> @@ -1709,7 +1830,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>> } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>> ste->s1_cfg = &smmu_domain->s1_cfg;
>> ste->s2_cfg = NULL;
>> +#if 0 /* Xen: S1 configuration not needed */
>
> What would be the issue with leaving this code uncommented?
>
>> arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
>> +#endif
>> } else {
>> ste->s1_cfg = NULL;
>> ste->s2_cfg = &smmu_domain->s2_cfg;
>> @@ -1721,6 +1844,7 @@ out_unlock:
>> return ret;
>> }
>> +#if 0
>
> /* Xen: ... */
>
>> static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
>> phys_addr_t paddr, size_t size, int prot)
>> {
>> @@ -1772,6 +1896,7 @@ struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
>> put_device(dev);
>> return dev ? dev_get_drvdata(dev) : NULL;
>> }
>> +#endif
>> static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>> {
>> @@ -1782,8 +1907,9 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>> return sid < limit;
>> }
>> -
>
> Please don't remove newline.
>
>> +#if 0
>> static struct iommu_ops arm_smmu_ops;
>> +#endif
>> static int arm_smmu_add_device(struct device *dev)
>> {
>> @@ -1791,9 +1917,12 @@ static int arm_smmu_add_device(struct device *dev)
>> struct arm_smmu_device *smmu;
>> struct arm_smmu_master_data *master;
>> struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +#if 0 /*Xen: iommu_group is not needed */
>> struct iommu_group *group;
>> +#endif
>> - if (!fwspec || fwspec->ops != &arm_smmu_ops)
>> + /* Xen: fwspec->ops are not needed */
>> + if (!fwspec)
>> return -ENODEV;
>> /*
>> * We _can_ actually withstand dodgy bus code re-calling add_device()
>> @@ -1830,6 +1959,12 @@ static int arm_smmu_add_device(struct device *dev)
>> }
>> }
>> +#if 0
>> +/*
>> + * Xen: Do not need an iommu group as the stream data is carried by the SMMU
>> + * master device object
>> + */
>
> This is better placed before the #if 0, so IDEs will still show the comment even when the #if 0 block is folded.
>
>> +
>> group = iommu_group_get_for_dev(dev);
>> if (!IS_ERR(group)) {
>> iommu_group_put(group);
>> @@ -1837,8 +1972,16 @@ static int arm_smmu_add_device(struct device *dev)
>> }
>> return PTR_ERR_OR_ZERO(group);
>> +#endif
>> + return 0;
>> }
>> +/*
>> + * Xen: We can potentially support this function and destroy a device. This
>> + * will be relevant for PCI hotplug. So, will be implemented as needed after
>> + * passthrough support is available.
>> + */
>> +#if 0
>> static void arm_smmu_remove_device(struct device *dev)
>> {
>> struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> @@ -1974,7 +2117,7 @@ static struct iommu_ops arm_smmu_ops = {
>> .put_resv_regions = arm_smmu_put_resv_regions,
>> .pgsize_bitmap = -1UL, /* Restricted during device attach */
>> };
>> -
>
> Ditto for the newline. I know I didn't mention it in every place in the previous series. But I would have expected you to apply my comments everywhere.
>
>> +#endif
>> /* Probing and initialisation functions */
>> static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>> struct arm_smmu_queue *q,
>> @@ -1984,13 +2127,19 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>> {
>> size_t qsz = ((1 << q->max_n_shift) * dwords) << 3;
>> - q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma, GFP_KERNEL);
>> + /* The SMMU cache coherency property is always set. Since we are sharing the CPU translation tables
>
> /*
> * ...
>
>> + * just make a regular allocation.
>
> I am not sure I understand it. AFAIU, q is for the command queue. So how does sharing the CPU translation tables help here?
>
> Furthermore, I don't understand how you can say the cache coherency property is always set. When I look at the driver, it seems to be able to handle non-coherent memory. So where do you modify that?
>
>> + */
>> + q->base = _xzalloc(qsz, sizeof(void *));
>> +
>> if (!q->base) {
>> dev_err(smmu->dev, "failed to allocate queue (0x%zx bytes)\n",
>> qsz);
>> return -ENOMEM;
>> }
>> + q->base_dma = virt_to_maddr(q->base);
>> +
>> q->prod_reg = arm_smmu_page1_fixup(prod_off, smmu);
>> q->cons_reg = arm_smmu_page1_fixup(cons_off, smmu);
>> q->ent_dwords = dwords;
>> @@ -2056,6 +2205,7 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
>> u64 reg;
>> u32 size, l1size;
>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>> + u32 alignment;
>> /* Calculate the L1 size, capped to the SIDSIZE. */
>> size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
>> @@ -2069,14 +2219,17 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
>> size, smmu->sid_bits);
>> l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
>> - strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
>> - GFP_KERNEL | __GFP_ZERO);
>> + alignment = max_t(u32, cfg->num_l1_ents, 64);
>
> Same as before. I know I didn't go through the rest of the code. But you could have at least applied my comments on alignment here too. E.g where does the 64 come from?
>
> But, it looks like to me you want to create a function dmam_alloc_coherent that will do the allocation for you. This could be used in a few places within file driver...
dmam_alloc_coherent uses the allocation size as the alignment. This is not as per spec. But that being said I am fine replicating the code from Linux. That will make my life easier :).
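For reference, this is roughly what I have in mind (untested sketch;
dma_addr_t and gfp_t are assumed to come from the linux_compat header,
and the dev/gfp arguments are simply ignored):

    static void *dmam_alloc_coherent(struct device *dev, size_t size,
                                     dma_addr_t *dma_handle, gfp_t gfp)
    {
        void *vaddr;
        unsigned long alignment = 1;

        /*
         * _xzalloc() insists on a power-of-two alignment; Linux aligns
         * coherent allocations to their size, so round up when the
         * size is not a power of two.
         */
        while (alignment < size)
            alignment <<= 1;

        vaddr = _xzalloc(size, alignment);
        if (!vaddr)
            return NULL;

        *dma_handle = virt_to_maddr(vaddr);

        return vaddr;
    }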
>
>> + strtab = _xzalloc(l1size, l1size);
>> +
>> if (!strtab) {
>> dev_err(smmu->dev,
>> "failed to allocate l1 stream table (%u bytes)\n",
>> size);
>> return -ENOMEM;
>> }
>> +
>> + cfg->strtab_dma = virt_to_maddr(strtab);
>> cfg->strtab = strtab;
>> /* Configure strtab_base_cfg for 2 levels */
>> @@ -2098,14 +2251,16 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>> size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
>> - strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
>> - GFP_KERNEL | __GFP_ZERO);
>
> ... such as here.
>
>> + strtab = _xzalloc(size, size);
>
> Hmmm, _xzalloc contains the following assert:
>
> ASSERT((align & (align - 1)) == 0);
>
> How are you sure the size will always honor this check?
I can add another check or a comment. The size here is (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3), i.e. a power of two multiplied by 64, so till now it has always passed this check.
>
>> +
>> if (!strtab) {
>> dev_err(smmu->dev,
>> "failed to allocate linear stream table (%u bytes)\n",
>> size);
>> return -ENOMEM;
>> }
>> +
>> + cfg->strtab_dma = virt_to_maddr(strtab);
>> cfg->strtab = strtab;
>> cfg->num_l1_ents = 1 << smmu->sid_bits;
>> @@ -2182,6 +2337,7 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
>> 1, ARM_SMMU_POLL_TIMEOUT_US);
>> }
>> +#if 0 /* Xen: There is no MSI support as yet */
>> static void arm_smmu_free_msis(void *data)
>> {
>> struct device *dev = data;
>> @@ -2247,36 +2403,39 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
>> /* Add callback to free MSIs on teardown */
>> devm_add_action(dev, arm_smmu_free_msis, dev);
>> }
>> +#endif
>> static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>> {
>> int irq, ret;
>> +#if 0 /*Xen: Cannot setup msis for now */
>> arm_smmu_setup_msis(smmu);
>> +#endif
>> /* Request interrupt lines */
>> irq = smmu->evtq.q.irq;
>> if (irq) {
>> - ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
>> - arm_smmu_evtq_thread,
>> - IRQF_ONESHOT,
>> - "arm-smmu-v3-evtq", smmu);
>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>
> Why do you need to set the IRQ type? Can't it be found from the firmware tables?
>
>> + ret = request_irq(irq, arm_smmu_evtq_thread,
>> + 0, "arm-smmu-v3-evtq", smmu);
>
> Please create a stub for devm_request_threaded_irq.
>
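> Something along these lines (sketch; xen_irq_handler_t is just a name
> for whatever the Xen-side handler signature ends up being, and the
> Linux-only arguments are ignored):
>
>     typedef void (*xen_irq_handler_t)(int irq, void *dev,
>                                       struct cpu_user_regs *regs);
>
>     static inline int devm_request_threaded_irq(struct device *dev,
>                                                 unsigned int irq,
>                                                 xen_irq_handler_t handler,
>                                                 xen_irq_handler_t thread_fn,
>                                                 unsigned long irqflags,
>                                                 const char *devname,
>                                                 void *priv)
>     {
>         /* If the IRQ type really needs forcing, do it in one place. */
>         irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>         return request_irq(irq, thread_fn ? thread_fn : handler, 0,
>                            devname, priv);
>     }
>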
>> if (ret < 0)
>> dev_warn(smmu->dev, "failed to enable evtq irq\n");
>> }
>> irq = smmu->cmdq.q.irq;
>> if (irq) {
>> - ret = devm_request_irq(smmu->dev, irq,
>> - arm_smmu_cmdq_sync_handler, 0,
>> - "arm-smmu-v3-cmdq-sync", smmu);
>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>> + ret = request_irq(irq, arm_smmu_cmdq_sync_handler,
>> + 0, "arm-smmu-v3-cmdq-sync", smmu);
>
> Ditto.
>
>> if (ret < 0)
>> dev_warn(smmu->dev, "failed to enable cmdq-sync irq\n");
>> }
>> irq = smmu->gerr_irq;
>> if (irq) {
>> - ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>> + ret = request_irq(irq, arm_smmu_gerror_handler,
>> 0, "arm-smmu-v3-gerror", smmu);
>
> Ditto.
>
>> if (ret < 0)
>> dev_warn(smmu->dev, "failed to enable gerror irq\n");
>> @@ -2284,12 +2443,13 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>> if (smmu->features & ARM_SMMU_FEAT_PRI) {
>> irq = smmu->priq.q.irq;
>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>> if (irq) {
>> - ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
>> - arm_smmu_priq_thread,
>> - IRQF_ONESHOT,
>> - "arm-smmu-v3-priq",
>> - smmu);
>> + ret = request_irq(irq,
>> + arm_smmu_priq_thread,
>> + 0,
>> + "arm-smmu-v3-priq",
>> + smmu);
>
> Ditto.
>
>> if (ret < 0)
>> dev_warn(smmu->dev,
>> "failed to enable priq irq\n");
>> @@ -2316,11 +2476,11 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
>> * Cavium ThunderX2 implementation doesn't support unique
>> * irq lines. Use single irq line for all the SMMUv3 interrupts.
>> */
>> - ret = devm_request_threaded_irq(smmu->dev, irq,
>> - arm_smmu_combined_irq_handler,
>> - arm_smmu_combined_irq_thread,
>> - IRQF_ONESHOT,
>> - "arm-smmu-v3-combined-irq", smmu);
>> + ret = request_irq(irq,
>> + arm_smmu_combined_irq_handler,
>> + 0,
>> + "arm-smmu-v3-combined-irq",
>> + smmu);
>
> Ditto. And here is a good example of where a stub helps: you set the IRQ type everywhere but not for this one.
>
>> if (ret < 0)
>> dev_warn(smmu->dev, "failed to enable combined irq\n");
>> } else
>> @@ -2542,8 +2702,11 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> smmu->features |= ARM_SMMU_FEAT_STALLS;
>> }
>> +#if 0 /* Xen: Do not enable Stage 1 translations */
>
> This is just saying stage-1 is available. So why do you care so much about disabling it? This just adds more #if 0; we managed to get away in the SMMUv1 port by leaving the code as is.
>
>> +
>> if (reg & IDR0_S1P)
>> smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
>> +#endif
>> if (reg & IDR0_S2P)
>> smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
>> @@ -2616,10 +2779,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> if (reg & IDR5_GRAN4K)
>> smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
>> +#if 0 /* Xen: SMMU ops do not have a pgsize_bitmap member for Xen */
>> if (arm_smmu_ops.pgsize_bitmap == -1UL)
>> arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
>> else
>> arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
>> +#endif
>> /* Output address size */
>> switch (reg & IDR5_OAS_MASK << IDR5_OAS_SHIFT) {
>> @@ -2646,10 +2811,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> smmu->oas = 48;
>> }
>> +#if 0 /* Xen: There is no support for DMA mask */
>
> Stub it?
>
>> /* Set the DMA mask for our table walker */
>> if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
>> dev_warn(smmu->dev,
>> "failed to set DMA mask for table walker\n");
>> +#endif
>> smmu->ias = max(smmu->ias, smmu->oas);
>> @@ -2680,7 +2847,8 @@ static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>> struct device *dev = smmu->dev;
>> struct acpi_iort_node *node;
>> - node = *(struct acpi_iort_node **)dev_get_platdata(dev);
>> + /* Xen: Modification to get iort_node */
>> + node = (struct acpi_iort_node *)dev->acpi_node;
>> /* Retrieve SMMUv3 specific data */
>> iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
>> @@ -2703,7 +2871,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>> static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>> struct arm_smmu_device *smmu)
>> {
>> - struct device *dev = &pdev->dev;
>> + struct device *dev = pdev;
>> u32 cells;
>> int ret = -EINVAL;
>> @@ -2716,8 +2884,8 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>> parse_driver_options(smmu);
>> - if (of_dma_is_coherent(dev->of_node))
>> - smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>> + /* Xen: Set the COHERENCY feature */
>> + smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>
> This looks completely wrong. You should only do it when the firmware tables say it is fine.
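> For the DT case that is just a matter of keeping the upstream check,
> e.g. (sketch of what of_dma_is_coherent() boils down to):
>
>     if (dt_find_property(dev_to_dt(pdev), "dma-coherent", NULL))
>         smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>
> For ACPI, the IORT SMMUv3 node carries a coherent-access override
> flag (COHACC) you could check instead.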
>
>> return ret;
>> }
>> @@ -2734,9 +2902,11 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>> {
>> int irq, ret;
>> struct resource *res;
>> +#if 0 /*Xen: Do not need to setup sysfs */
>> resource_size_t ioaddr;
>> +#endif
>> struct arm_smmu_device *smmu;
>> - struct device *dev = &pdev->dev;
>> + struct device *dev = pdev; /* Xen: dev is ignored */
>> bool bypass;
>> smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
>> @@ -2763,8 +2933,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>> dev_err(dev, "MMIO region too small (%pr)\n", res);
>> return -EINVAL;
>> }
>> +#if 0 /*Xen: Do not need to setup sysfs */
>> ioaddr = res->start;
>> -
>
> Again the newline.
>
>> +#endif
>> smmu->base = devm_ioremap_resource(dev, res);
>> if (IS_ERR(smmu->base))
>> return PTR_ERR(smmu->base);
>> @@ -2802,13 +2973,16 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>> return ret;
>> /* Record our private device structure */
>> +#if 0 /* Xen: SMMU is not treated as a platform device */
>> platform_set_drvdata(pdev, smmu);
>> -
>
> Again the newline.
>
>> +#endif
>> /* Reset the device */
>> ret = arm_smmu_device_reset(smmu, bypass);
>> if (ret)
>> return ret;
>> +/* Xen: Not creating an IOMMU device list for Xen */
>> +#if 0
>> /* And we're up. Go go go! */
>> ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
>> "smmu3.%pa", &ioaddr);
>> @@ -2844,9 +3018,18 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>> if (ret)
>> return ret;
>> }
>> +#endif
>> + /*
>> + * Xen: Keep a list of all probed devices. This will be used to query
>> + * the smmu devices based on the fwnode.
>> + */
>> + INIT_LIST_HEAD(&smmu->devices);
>> + spin_lock(&arm_smmu_devices_lock);
>> + list_add(&smmu->devices, &arm_smmu_devices);
>> + spin_unlock(&arm_smmu_devices_lock);
>> return 0;
>> }
>> -
>
> Again, don't remove the newline, and the #if 0 below needs a /* Xen: ... */ comment.
>> +#if 0
>> static int arm_smmu_device_remove(struct platform_device *pdev)
>> {
>> struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
>> @@ -2860,6 +3043,10 @@ static void arm_smmu_device_shutdown(struct platform_device *pdev)
>> {
>> arm_smmu_device_remove(pdev);
>> }
>> +#endif
>> +
>> +#define MODULE_DEVICE_TABLE(type, name)
>> +#define of_device_id dt_device_match
>
> That should be defined at the top of the file, next to the other compat defines.
>
>> static const struct of_device_id arm_smmu_of_match[] = {
>> { .compatible = "arm,smmu-v3", },
>> @@ -2867,6 +3054,7 @@ static const struct of_device_id arm_smmu_of_match[] = {
>> };
>> MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
>> +#if 0
>> static struct platform_driver arm_smmu_driver = {
>> .driver = {
>> .name = "arm-smmu-v3",
>> @@ -2883,3 +3071,318 @@ IOMMU_OF_DECLARE(arm_smmuv3, "arm,smmu-v3", NULL);
>> MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
>> MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
>> MODULE_LICENSE("GPL v2");
>> +#endif
>> +
>> +/***** Start of Xen specific code *****/
>> +
>> +static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
>> +{
>> + struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
>> + struct iommu_domain *cfg;
>> +
>> + spin_lock(&smmu_domain->lock);
>> + list_for_each_entry(cfg, &smmu_domain->iommu_domains, list) {
>> + /*
>> + * Only invalidate the context when SMMU is present.
>> + * This is because the context initialization is delayed
>> + * until a master has been added.
>> + */
>> + if (unlikely(!ACCESS_ONCE(cfg->priv->smmu)))
>> + continue;
>> + arm_smmu_tlb_inv_context(cfg->priv);
>> + }
>> + spin_unlock(&smmu_domain->lock);
>> + return 0;
>> +}
>> +
>> +static int __must_check arm_smmu_iotlb_flush(struct domain *d,
>> + unsigned long gfn,
>> + unsigned int page_count)
>> +{
>> + return arm_smmu_iotlb_flush_all(d);
>> +}
>> +
>> +static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
>> + struct device *dev)
>> +{
>> + struct iommu_domain *domain;
>> + struct arm_smmu_xen_domain *xen_domain;
>> + struct arm_smmu_device *smmu;
>> + struct arm_smmu_domain *smmu_domain;
>> +
>> + xen_domain = dom_iommu(d)->arch.priv;
>> +
>> + smmu = arm_smmu_get_by_fwnode(dev->iommu_fwspec->iommu_fwnode);
>> + if (!smmu)
>> + return NULL;
>> +
>> + /*
>> + * Loop through the &xen_domain->contexts to locate a context
>> + * assigned to this SMMU
>> + */
>> + list_for_each_entry(domain, &xen_domain->iommu_domains, list) {
>> + smmu_domain = to_smmu_domain(domain);
>> + if (smmu_domain->smmu == smmu)
>> + return domain;
>> + }
>> +
>> + return NULL;
>> +}
>> +
>> +static void arm_smmu_destroy_iommu_domain(struct iommu_domain *domain)
>> +{
>> + list_del(&domain->list);
>> + arm_smmu_domain_free(domain);
>> +}
>> +
>> +static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
>> + struct device *dev, u32 flag)
>> +{
>> + int ret = 0;
>> + struct iommu_domain *domain;
>> + struct arm_smmu_xen_domain *xen_domain;
>> + struct arm_smmu_domain *arm_smmu;
>> +
>> + xen_domain = dom_iommu(d)->arch.priv;
>> +
>> + if (!dev->archdata.iommu) {
>> + dev->archdata.iommu = xzalloc(struct arm_smmu_xen_device);
>> + if (!dev->archdata.iommu)
>> + return -ENOMEM;
>> + }
>> +
>> + ret = arm_smmu_add_device(dev);
>> + if (ret)
>> + return ret;
>> +
>> + spin_lock(&xen_domain->lock);
>> +
>> + /*
>> + * Check to see if an iommu_domain already exists for this xen domain
>> + * under the same SMMU
>> + */
>> + domain = arm_smmu_get_domain(d, dev);
>> + if (!domain) {
>> +
>> + domain = arm_smmu_domain_alloc(IOMMU_DOMAIN_DMA);
>> + if (!domain) {
>> + ret = -ENOMEM;
>> + goto out;
>> + }
>> +
>> + arm_smmu = to_smmu_domain(domain);
>> + arm_smmu->s2_cfg.domain = d;
>> +
>> + /* Chain the new context to the domain */
>> + list_add(&domain->list, &xen_domain->iommu_domains);
>> +
>> + }
>> +
>> + ret = arm_smmu_attach_dev(domain, dev);
>> + if (ret) {
>> + if (domain->ref.counter == 0)
>> + arm_smmu_destroy_iommu_domain(domain);
>> + } else {
>> + atomic_inc(&domain->ref);
>> + }
>> +
>> +out:
>> + spin_unlock(&xen_domain->lock);
>> + return ret;
>> +}
>> +
>> +static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
>> +{
>> + struct iommu_domain *domain = arm_smmu_get_domain(d, dev);
>> + struct arm_smmu_xen_domain *xen_domain;
>> + struct arm_smmu_domain *arm_smmu = to_smmu_domain(domain);
>> +
>> + xen_domain = dom_iommu(d)->arch.priv;
>> +
>> + if (!arm_smmu || arm_smmu->s2_cfg.domain != d) {
>> + dev_err(dev, " not attached to domain %d\n", d->domain_id);
>> + return -ESRCH;
>> + }
>> +
>> + spin_lock(&xen_domain->lock);
>> +
>> + arm_smmu_detach_dev(dev);
>> + atomic_dec(&domain->ref);
>> +
>> + if (domain->ref.counter == 0)
>> + arm_smmu_destroy_iommu_domain(domain);
>> +
>> + spin_unlock(&xen_domain->lock);
>> +
>> + return 0;
>> +}
>> +
>> +static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
>> + u8 devfn, struct device *dev)
>> +{
>> + int ret = 0;
>> +
>> + /* Don't allow remapping on other domain than hwdom */
>> + if (t && t != hardware_domain)
>> + return -EPERM;
>> +
>> + if (t == s)
>> + return 0;
>> +
>> + ret = arm_smmu_deassign_dev(s, dev);
>> + if (ret)
>> + return ret;
>> +
>> + if (t) {
>> + /* No flags are defined for ARM. */
>> + ret = arm_smmu_assign_dev(t, devfn, dev, 0);
>> + if (ret)
>> + return ret;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int arm_smmu_iommu_domain_init(struct domain *d)
>> +{
>> + struct arm_smmu_xen_domain *xen_domain;
>> +
>> + xen_domain = xzalloc(struct arm_smmu_xen_domain);
>> + if (!xen_domain)
>> + return -ENOMEM;
>> +
>> + spin_lock_init(&xen_domain->lock);
>> + INIT_LIST_HEAD(&xen_domain->iommu_domains);
>> +
>> + dom_iommu(d)->arch.priv = xen_domain;
>> +
>> + return 0;
>> +}
>> +
>> +static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
>> +{
>> +}
>> +
>> +static void arm_smmu_iommu_domain_teardown(struct domain *d)
>> +{
>> + struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>> +
>> + ASSERT(list_empty(&xen_domain->iommu_domains));
>> + xfree(xen_domain);
>> +}
>> +
>> +static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
>> + unsigned long mfn, unsigned int flags)
>> +{
>> + p2m_type_t t;
>> +
>> + /*
>> + * Grant mappings can be used for DMA requests. The dev_bus_addr
>> + * returned by the hypercall is the MFN (not the IPA). For devices
>> + * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain
>> + * p2m to allow DMA requests to work.
>> + * This is only valid when the domain is direct mapped. Hence this
>> + * function should only be used by gnttab code with gfn == mfn.
>> + */
>> + BUG_ON(!is_domain_direct_mapped(d));
>> + BUG_ON(mfn != gfn);
>> +
>> + /* We only support readable and writable flags */
>> + if (!(flags & (IOMMUF_readable | IOMMUF_writable)))
>> + return -EINVAL;
>> +
>> + t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro;
>> +
>> + /*
>> + * The function guest_physmap_add_entry replaces the current mapping
>> + * if there is already one...
>> + */
>> + return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
>> +}
>> +
>> +static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
>> +{
>> + /*
>> + * This function should only be used by gnttab code when the domain
>> + * is direct mapped
>> + */
>> + if (!is_domain_direct_mapped(d))
>> + return -EINVAL;
>> +
>> + return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
>> +}
>> +
>> +static const struct iommu_ops arm_smmu_iommu_ops = {
>> + .init = arm_smmu_iommu_domain_init,
>> + .hwdom_init = arm_smmu_iommu_hwdom_init,
>> + .teardown = arm_smmu_iommu_domain_teardown,
>> + .iotlb_flush = arm_smmu_iotlb_flush,
>> + .iotlb_flush_all = arm_smmu_iotlb_flush_all,
>> + .assign_device = arm_smmu_assign_dev,
>> + .reassign_device = arm_smmu_reassign_dev,
>> + .map_page = arm_smmu_map_page,
>> + .unmap_page = arm_smmu_unmap_page,
>> +};
>> +
>> +static
>> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
>> +{
>> + struct arm_smmu_device *smmu, *found = NULL;
>> +
>> + spin_lock(&arm_smmu_devices_lock);
>> + list_for_each_entry(smmu, &arm_smmu_devices, devices) {
>> + if (smmu->dev->fwnode == fwnode) {
>> + found = smmu;
>> + break;
>> + }
>> + }
>> + spin_unlock(&arm_smmu_devices_lock);
>> +
>> + return found;
>> +}
>> +
>> +static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>> + const void *data)
>> +{
>> + int rc;
>> +
>> + /*
>> + * Even if the device can't be initialized, we don't want to
>> + * give the SMMU device to dom0.
>> + */
>> + dt_device_set_used_by(dev, DOMID_XEN);
>> +
>> + rc = arm_smmu_device_probe(dt_to_dev(dev));
>> + if (rc)
>> + return rc;
>> +
>> + iommu_set_ops(&arm_smmu_iommu_ops);
>> +
>> + return 0;
>> +}
>> +
>> +DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
>> + .dt_match = arm_smmu_of_match,
>> + .init = arm_smmu_dt_init,
>> +DT_DEVICE_END
>> +
>> +#ifdef CONFIG_ACPI
>> +/* Set up the IOMMU */
>> +static int __init arm_smmu_acpi_init(const void *data)
>> +{
>> + int rc;
>> + rc = arm_smmu_device_probe((struct device *)data);
>> +
>> + if (rc)
>> + return rc;
>> +
>> + iommu_set_ops(&arm_smmu_iommu_ops);
>> + return 0;
>> +}
>> +
>> +ACPI_DEVICE_START(asmmuv3, "ARM SMMU V3", DEVICE_IOMMU)
>> + .class_type = ACPI_IORT_NODE_SMMU_V3,
>> + .init = arm_smmu_acpi_init,
>> +ACPI_DEVICE_END
>> +
>> +#endif
>>
> Cheers,
>
I'll fix the newlines as needed. I thought I had got them all but it seems a few were still missed.
Thanks,
Sameer
--
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc. Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
2017-12-05 23:26 ` Goel, Sameer
@ 2017-12-06 9:55 ` Julien Grall
2017-12-06 10:01 ` Julien Grall
1 sibling, 0 replies; 19+ messages in thread
From: Julien Grall @ 2017-12-06 9:55 UTC (permalink / raw)
To: Goel, Sameer, xen-devel, julien.grall, mjaggi; +Cc: sstabellini, shankerd
On 12/05/2017 11:26 PM, Goel, Sameer wrote:
>
>
> On 12/5/2017 7:17 AM, Julien Grall wrote:
>> Hello,
>>
>> On 05/12/17 03:59, Sameer Goel wrote:
>>> This driver follows an approach similar to the smmu driver. The intent here
>>> is to reuse as much Linux code as possible.
>>> - Glue code has been introduced in headers to bridge the API calls.
>>> - Called Linux functions from the Xen IOMMU function calls.
>>> - Xen modifications are preceded by /*Xen: comment */
>>> - New config items for SMMUv3 and legacy SMMU have been defined.
>>
>> There are no reason to touch legacy SMMU in this patch. Please move that outside of it.
> Ok.
>>
>>>
>>> Signed-off-by: Sameer Goel <sgoel@codeaurora.org>
>>> ---
>>> xen/drivers/Kconfig | 2 +
>>> xen/drivers/passthrough/arm/Kconfig | 14 +
>>> xen/drivers/passthrough/arm/Makefile | 3 +-
>>> xen/drivers/passthrough/arm/arm_smmu.h | 189 ++++++++++
>>> xen/drivers/passthrough/arm/smmu-v3.c | 619 ++++++++++++++++++++++++++++++---
>>> 5 files changed, 768 insertions(+), 59 deletions(-)
>>> create mode 100644 xen/drivers/passthrough/arm/Kconfig
>>> create mode 100644 xen/drivers/passthrough/arm/arm_smmu.h
>>>
>>> diff --git a/xen/drivers/Kconfig b/xen/drivers/Kconfig
>>> index bc3a54f..6126553 100644
>>> --- a/xen/drivers/Kconfig
>>> +++ b/xen/drivers/Kconfig
>>> @@ -12,4 +12,6 @@ source "drivers/pci/Kconfig"
>>> source "drivers/video/Kconfig"
>>> +source "drivers/passthrough/arm/Kconfig"
>>> +
>>> endmenu
>>> diff --git a/xen/drivers/passthrough/arm/Kconfig b/xen/drivers/passthrough/arm/Kconfig
>>> new file mode 100644
>>> index 0000000..9ac4cea
>>> --- /dev/null
>>> +++ b/xen/drivers/passthrough/arm/Kconfig
>>> @@ -0,0 +1,14 @@
>>> +
>>> +config ARM_SMMU
>>> + bool "ARM SMMU v1/2 support"
>>> + depends on ARM_64
>>
>> Why? SMMUv1 and SMMUv2 support Arm 32-bit.
>>
>>> + help
>>> + Support for implementations of the ARM System MMU architecture. (1/2)
>>
>> I am not sure I understand the "(1/2)" after the full stop.
>>
>>> +
>>> +config ARM_SMMU_v3
>>> + bool "ARM SMMUv3 Support"
>>> + depends on ARM_64
>>> + help
>>> + Support for implementations of the ARM System MMU architecture
>>> + version 3.
>>> +
>>> diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
>>> index f4cd26e..5b3eb15 100644
>>> --- a/xen/drivers/passthrough/arm/Makefile
>>> +++ b/xen/drivers/passthrough/arm/Makefile
>>> @@ -1,2 +1,3 @@
>>> obj-y += iommu.o
>>> -obj-y += smmu.o
>>> +obj-$(CONFIG_ARM_SMMU) += smmu.o
>>> +obj-$(CONFIG_ARM_SMMU_v3) += smmu-v3.o
>>> diff --git a/xen/drivers/passthrough/arm/arm_smmu.h b/xen/drivers/passthrough/arm/arm_smmu.h
>>> new file mode 100644
>>> index 0000000..b5e161f
>>> --- /dev/null
>>> +++ b/xen/drivers/passthrough/arm/arm_smmu.h
>>
>> I don't think there is any value in using Linux coding style in this header. It contains Xen stubs.
>>
>> I would also have expected this new file to come in a separate patch with the associated modifications to SMMUv2. This would make it easier to see what could be common.
> That makes sense. I was holding it back till I post the first actual patch and just wanted to put out the SMMUv3 patches.
>
>>
>>> @@ -0,0 +1,189 @@
>>> +/******************************************************************************
>>> + * ./arm_smmu.h
>>> + *
>>> + * Common compatibility defines and data_structures for porting arm smmu
>>> + * drivers from Linux.
>>
>> [...]
>>
>>> +static struct resource *platform_get_resource(struct platform_device *pdev,
>>> + unsigned int type,
>>> + unsigned int num)
>>> +{
>>> + /*
>>> + * The resource is only used between 2 calls of platform_get_resource.
>>> + * It's quite ugly but it avoids adding too much code in the part
>>> + * imported from Linux
>>> + */
>>> + static struct resource res;
>>> + struct acpi_iort_node *iort_node;
>>> + struct acpi_iort_smmu_v3 *node_smmu_data;
>>> + int ret = 0;
>>> +
>>> + res.type = type;
>>> +
>>> + switch (type) {
>>> + case IORESOURCE_MEM:
>>> + if (pdev->type == DEV_ACPI) {
>>> + ret = 1;
>>> + iort_node = pdev->acpi_node;
>>> + node_smmu_data =
>>> + (struct acpi_iort_smmu_v3 *)iort_node->node_data;
>>
>> Above you say: "Common compatibility defines and data_structures for porting arm smmu driver from Linux". But this code is clearly SMMUv3.
>>
> It is. I will pull this into the SMMUv3 driver.
>>> +
>>> + if (node_smmu_data != NULL) {
>>> + res.addr = node_smmu_data->base_address;
>>> + res.size = SZ_128K;
>>> + ret = 0;
>>> + }
>>> + } else {
>>> + ret = dt_device_get_address(dev_to_dt(pdev), num,
>>> + &res.addr, &res.size);
>>> + }
>>> +
>>> + return ((ret) ? NULL : &res);
>>> +
>>> + case IORESOURCE_IRQ:
>>> + /* ACPI case not implemented as there is no use case for it */
>>> + ret = platform_get_irq(dev_to_dt(pdev), num);
>>> +
>>> + if (ret < 0)
>>> + return NULL;
>>> +
>>> + res.addr = ret;
>>> + res.size = 1;
>>> +
>>> + return &res;
>>> +
>>> + default:
>>> + return NULL;
>>> + }
>>> +}
>>> +
>>> +static int platform_get_irq_byname(struct platform_device *pdev, const char *name)
>>> +{
>>> + const struct dt_property *dtprop;
>>> + struct acpi_iort_node *iort_node;
>>> + struct acpi_iort_smmu_v3 *node_smmu_data;
>>> + int ret = 0;
>>> +
>>> + if (pdev->type == DEV_ACPI) {
>>> + iort_node = pdev->acpi_node;
>>> + node_smmu_data = (struct acpi_iort_smmu_v3 *)iort_node->node_data;
>>
>> Ditto.
>>
>>> +
>>> + if (node_smmu_data != NULL) {
>>> + if (!strcmp(name, "eventq"))
>>> + ret = node_smmu_data->event_gsiv;
>>> + else if (!strcmp(name, "priq"))
>>> + ret = node_smmu_data->pri_gsiv;
>>> + else if (!strcmp(name, "cmdq-sync"))
>>> + ret = node_smmu_data->sync_gsiv;
>>> + else if (!strcmp(name, "gerror"))
>>> + ret = node_smmu_data->gerr_gsiv;
>>> + else
>>> + ret = -EINVAL;
>>> + }
>>> + } else {
>>> + dtprop = dt_find_property(dev_to_dt(pdev), "interrupt-names", NULL);
>>> + if (!dtprop)
>>> + return -EINVAL;
>>> +
>>> + if (!dtprop->value)
>>> + return -ENODATA;
>>> + }
>>> +
>>> + return ret;
>>> +}
>>> +
>>> +/* Xen: Stub out DMA domain related functions */
>>
>> I don't think 'Xen:' is necessary as this file contains Xen stubs.
> Ok.
>>
>>> +#define iommu_get_dma_cookie(dom) 0
>>> +#define iommu_put_dma_cookie(dom) 0
>>> +
>>> +static void __iomem *devm_ioremap_resource(struct device *dev,
>>> + struct resource *res)
>>> +{
>>> + void __iomem *ptr;
>>> +
>>> + if (!res || res->type != IORESOURCE_MEM) {
>>> + dev_err(dev, "Invalid resource\n");
>>> + return ERR_PTR(-EINVAL);
>>> + }
>>> +
>>> + ptr = ioremap_nocache(res->addr, res->size);
>>> + if (!ptr) {
>>> + dev_err(dev,
>>> + "ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
>>> + res->addr, res->size);
>>> + return ERR_PTR(-ENOMEM);
>>> + }
>>> +
>>> + return ptr;
>>> +}
>>> +
>>> +/* Xen: Dummy iommu_domain */
>>> +struct iommu_domain {
>>> + /* Runtime SMMU configuration for this iommu_domain */
>>> + struct arm_smmu_domain *priv;
>>> + unsigned int type;
>>
>> What are the values for type?
>>
>>> +
>>> + atomic_t ref;
>>> + /* Used to link iommu_domain contexts for a same domain.
>>
>> /*
>> * Used ...
>> */
>>
>>> + * There is at least one per SMMU to be used by the domain.
>>> + */
>>> + struct list_head list;
>>> +};
>>> +/* Xen: Domain type definitions. Not really needed for Xen, defining to port
>>
>> /*
>> * Xen: ...
>>
>>> + * Linux code as-is
>>> + */
>>> +#define IOMMU_DOMAIN_UNMANAGED 0
>>> +#define IOMMU_DOMAIN_DMA 1
>>> +#define IOMMU_DOMAIN_IDENTITY 2
>>> +
>>> +/* Xen: Describes information required for a Xen domain */
>>> +struct arm_smmu_xen_domain {
>>> + spinlock_t lock;
>>> + /* List of iommu domains associated to this domain */
>>> + struct list_head iommu_domains;
>>> +};
>>> +
>>> +/*
>>> + * Xen: Information about each device stored in dev->archdata.iommu
>>> + *
>>> + * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
>>> + * the SMMU).
>>> + */
>>> +struct arm_smmu_xen_device {
>>> + struct iommu_domain *domain;
>>> +};
>>> +
>>> +#endif /* __ARM_SMMU_H__ */
>>
>> Missing emacs magic.
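>>
>> I.e. the usual trailer at the end of the file (style values adjusted to match the file):
>>
>> /*
>>  * Local variables:
>>  * mode: C
>>  * c-file-style: "BSD"
>>  * c-basic-offset: 4
>>  * indent-tabs-mode: nil
>>  * End:
>>  */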
>>
>>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>>> index e67ba6c..c6c1b99 100644
>>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>>> @@ -18,28 +18,38 @@
>>> * Author: Will Deacon <will.deacon@arm.com>
>>> *
>>> * This driver is powered by bad coffee and bombay mix.
>>> + *
>>> + *
>>> + * Based on Linux drivers/iommu/arm-smmu-v3.c
>>> + * => commit 7aa8619a66aea52b145e04cbab4f8d6a4e5f3f3b
>>> + *
>>> + * Xen modifications:
>>> + * Sameer Goel <sameer.goel@linaro.org>
>>> + * Copyright (C) 2017, The Linux Foundation, All rights reserved.
>>> + *
>>> */
>>> -#include <linux/acpi.h>
>>> -#include <linux/acpi_iort.h>
>>> -#include <linux/delay.h>
>>> -#include <linux/dma-iommu.h>
>>> -#include <linux/err.h>
>>> -#include <linux/interrupt.h>
>>> -#include <linux/iommu.h>
>>> -#include <linux/iopoll.h>
>>> -#include <linux/module.h>
>>> -#include <linux/msi.h>
>>> -#include <linux/of.h>
>>> -#include <linux/of_address.h>
>>> -#include <linux/of_iommu.h>
>>> -#include <linux/of_platform.h>
>>> -#include <linux/pci.h>
>>> -#include <linux/platform_device.h>
>>> -
>>> -#include <linux/amba/bus.h>
>>> -
>>> -#include "io-pgtable.h"
>>> +#include <xen/acpi.h>
>>> +#include <xen/config.h>
>>> +#include <xen/delay.h>
>>> +#include <xen/errno.h>
>>> +#include <xen/err.h>
>>> +#include <xen/irq.h>
>>> +#include <xen/lib.h>
>>> +#include <xen/linux_compat.h>
>>> +#include <xen/list.h>
>>> +#include <xen/mm.h>
>>> +#include <xen/rbtree.h>
>>> +#include <xen/sched.h>
>>> +#include <xen/sizes.h>
>>> +#include <xen/vmap.h>
>>> +#include <acpi/acpi_iort.h>
>>> +#include <asm/atomic.h>
>>> +#include <asm/device.h>
>>> +#include <asm/io.h>
>>> +#include <asm/platform.h>
>>> +
>>> +#include "arm_smmu.h" /* Not a self contained header. So last in the list */
>>> /* MMIO registers */
>>> #define ARM_SMMU_IDR0 0x0
>>> @@ -423,9 +433,12 @@
>>> #endif
>>> static bool disable_bypass;
>>> +
>>> +#if 0 /* Xen: Not applicable for Xen */
>>> module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
>>> MODULE_PARM_DESC(disable_bypass,
>>> "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
>>> +#endif
>>
>> Can't you stub module_param_named and MODULE_PARM_DESC to avoid #if 0?
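>>
>> E.g. no-op stubs similar to what the SMMUv1 port already carries (a sketch):
>>
>> #define module_param_named(name, value, type, perm)
>> #define MODULE_PARM_DESC(_parm, desc)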
>>
>>> enum pri_resp {
>>> PRI_RESP_DENY,
>>> @@ -433,6 +446,7 @@ enum pri_resp {
>>> PRI_RESP_SUCC,
>>> };
>>> +#if 0 /* Xen: No MSI support in this iteration */
>>> enum arm_smmu_msi_index {
>>> EVTQ_MSI_INDEX,
>>> GERROR_MSI_INDEX,
>>> @@ -457,6 +471,7 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
>>> ARM_SMMU_PRIQ_IRQ_CFG2,
>>> },
>>> };
>>> +#endif
>>> struct arm_smmu_cmdq_ent {
>>> /* Common fields */
>>> @@ -561,6 +576,8 @@ struct arm_smmu_s2_cfg {
>>> u16 vmid;
>>> u64 vttbr;
>>> u64 vtcr;
>>> + /* Xen: Domain associated to this configuration */
>>> + struct domain *domain;
>>> };
>>> struct arm_smmu_strtab_ent {
>>> @@ -635,9 +652,21 @@ struct arm_smmu_device {
>>> struct arm_smmu_strtab_cfg strtab_cfg;
>>> /* IOMMU core code handle */
>>> +#if 0 /*Xen: Generic iommu_device ref not needed here */
>>> struct iommu_device iommu;
>>> +#endif
>>> + /* Xen: Need to keep a list of SMMU devices */
>>> + struct list_head devices;
>>> };
>>> +/* Xen: Keep a list of devices associated with this driver */
>>> +static DEFINE_SPINLOCK(arm_smmu_devices_lock);
>>> +static LIST_HEAD(arm_smmu_devices);
>>> +/* Xen: Helper for finding a device using fwnode */
>>> +static
>>> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode);
>>> +
>>> +
>>> /* SMMU private data for each master */
>>> struct arm_smmu_master_data {
>>> struct arm_smmu_device *smmu;
>>> @@ -654,7 +683,7 @@ enum arm_smmu_domain_stage {
>>> struct arm_smmu_domain {
>>> struct arm_smmu_device *smmu;
>>> - struct mutex init_mutex; /* Protects smmu pointer */
>>> + mutex init_mutex; /* Protects smmu pointer */
>>> struct io_pgtable_ops *pgtbl_ops;
>>> @@ -961,6 +990,7 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>>> spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>>> }
>>> +#if 0 /*Xen: Comment out functions that set up S1 translations */
>>
>> Why? I do agree that the code will not be used by Xen, but I would prefer if you minimize the number of #ifdef.
>>
>>> /* Context descriptor manipulation functions */
>>> static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
>>> {
>>> @@ -1003,6 +1033,7 @@ static void arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
>>> cfg->cdptr[3] = cpu_to_le64(cfg->cd.mair << CTXDESC_CD_3_MAIR_SHIFT);
>>> }
>>> +#endif
>>> /* Stream table manipulation functions */
>>> static void
>>> @@ -1164,6 +1195,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>>> void *strtab;
>>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>>> struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
>>> + u32 alignment = 0;
>>
>> It is not necessary to initialize alignment. Also, we are trying to limit the use of u* in favor of uint32_t.
>>
>>> if (desc->l2ptr)
>>> return 0;
>>> @@ -1172,14 +1204,16 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>>> strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
>>> desc->span = STRTAB_SPLIT + 1;
>>> - desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
>>> - GFP_KERNEL | __GFP_ZERO);
>>> + /* Alignment picked from ARM SMMU arch version 3.x. L1ST.L2Ptr */
>>> + alignment = 1 << ((5 + (desc->span - 1)));
>>> + desc->l2ptr = _xzalloc(size, alignment);
>>> if (!desc->l2ptr) {
>>> dev_err(smmu->dev,
>>> "failed to allocate l2 stream table for SID %u\n",
>>> sid);
>>> return -ENOMEM;
>>> }
>>> + desc->l2ptr_dma = virt_to_maddr(desc->l2ptr);
>>> arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
>>> arm_smmu_write_strtab_l1_desc(strtab, desc);
>>> @@ -1232,7 +1266,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>>> dev_info(smmu->dev, "unexpected PRI request received:\n");
>>> dev_info(smmu->dev,
>>> - "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
>>> + "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova %#" PRIx64 "\n",
>>> sid, ssid, grpid, last ? "L" : "",
>>> evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
>>> evt[0] & PRIQ_0_PERM_READ ? "R" : "",
>>> @@ -1346,6 +1380,8 @@ static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
>>> {
>>> arm_smmu_gerror_handler(irq, dev);
>>> arm_smmu_cmdq_sync_handler(irq, dev);
>>> + /*Xen: No threaded irq. So call the required function from here */
>>> + arm_smmu_combined_irq_thread(irq, dev);
>>> return IRQ_WAKE_THREAD;
>>> }
>>> @@ -1358,11 +1394,49 @@ static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
>>> arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>>> }
>>> +static void arm_smmu_evtq_thread_xen(int irq, void *dev,
>>> + struct cpu_user_regs *regs)
>>> +{
>>> + arm_smmu_evtq_thread(irq, dev);
>>> +}
>>> +
>>> +static void arm_smmu_priq_thread_xen(int irq, void *dev,
>>> + struct cpu_user_regs *regs)
>>> +{
>>> + arm_smmu_priq_thread(irq, dev);
>>> +}
>>> +
>>> +static void arm_smmu_cmdq_sync_handler_xen(int irq, void *dev,
>>> + struct cpu_user_regs *regs)
>>> +{
>>> + arm_smmu_cmdq_sync_handler(irq, dev);
>>> +}
>>> +
>>> +static void arm_smmu_gerror_handler_xen(int irq, void *dev,
>>> + struct cpu_user_regs *regs)
>>> +{
>>> + arm_smmu_gerror_handler(irq, dev);
>>> +}
>>> +
>>> +static void arm_smmu_combined_irq_handler_xen(int irq, void *dev,
>>> + struct cpu_user_regs *regs)
>>> +{
>>> + arm_smmu_combined_irq_handler(irq, dev);
>>> +}
>>> +
>>
>> Missing:
>> /* Xen: .... */
>>
>>> +#define arm_smmu_evtq_thread arm_smmu_evtq_thread_xen
>>> +#define arm_smmu_priq_thread arm_smmu_priq_thread_xen
>>> +#define arm_smmu_cmdq_sync_handler arm_smmu_cmdq_sync_handler_xen
>>> +#define arm_smmu_gerror_handler arm_smmu_gerror_handler_xen
>>> +#define arm_smmu_combined_irq_handler arm_smmu_combined_irq_handler_xen
>>> +
>>> +#if 0 /*Xen: Unused function */
>>> static void arm_smmu_tlb_sync(void *cookie)
>>> {
>>> struct arm_smmu_domain *smmu_domain = cookie;
>>> __arm_smmu_tlb_sync(smmu_domain->smmu);
>>> }
>>> +#endif
>>> static void arm_smmu_tlb_inv_context(void *cookie)
>>> {
>>> @@ -1383,6 +1457,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>>> __arm_smmu_tlb_sync(smmu);
>>> }
>>> +#if 0 /*Xen: Unused functionality */
>>> static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>>> size_t granule, bool leaf, void *cookie)
>>> {
>>> @@ -1427,6 +1502,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
>>> return false;
>>> }
>>> }
>>> +#endif
>>> static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>>> {
>>> @@ -1474,6 +1550,7 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
>>> clear_bit(idx, map);
>>> }
>>> +#if 0
>>> static void arm_smmu_domain_free(struct iommu_domain *domain)
>>> {
>>> struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>>> @@ -1502,7 +1579,23 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>>> kfree(smmu_domain);
>>> }
>>> +#endif
>>> +
>>> +static void arm_smmu_domain_free(struct iommu_domain *domain)
>>> +{
>>> + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>>> + struct arm_smmu_device *smmu = smmu_domain->smmu;
>>> + struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>>> + /*
>>> + * Xen: Remove the free functions that are not used and code related
>>> + * to S1 translation. We just need to free the domain and vmid here.
>>> + */
>>
>> Can you please give a reason to remove stage-1 code? This is not in the spririt of a verbatim port and I still can't see why you can't keep it.
>
> I was just clearing it out as it was not used. I will put it back in.
>
>>
>>> + if (cfg->vmid)
>>> + arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
>>> + kfree(smmu_domain);
>>> +}
>>> +#if 0 /*Xen: The finalize domain functions are not needed in current form */
>>> static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
>>> struct io_pgtable_cfg *pgtbl_cfg)
>>> {
>>> @@ -1551,16 +1644,41 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>>> cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
>>> return 0;
>>> }
>>> +#endif
>>> +
>>> +static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain)
>>> +{
>>> + int vmid;
>>> + struct arm_smmu_device *smmu = smmu_domain->smmu;
>>> + struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>>> +
>>> + vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>>> + if (vmid < 0)
>>> + return vmid;
>>> +
>>> + /* Xen: Get the ttbr and vtcr values
>>
>> /*
>> * Xen: ...
>>
>> But why do you need to duplicate the function when you can just replace the 2 lines that need to be modified?
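>>
>> I.e. keep the Linux function and only swap the two assignments, along these lines (a sketch reusing the values from your version below):
>>
>> -	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
>> -	cfg->vtcr	= pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
>> +	/* Xen: The p2m is shared with the CPU. */
>> +	cfg->vttbr	= page_to_maddr(cfg->domain->arch.p2m.root);
>> +	cfg->vtcr	= READ_SYSREG32(VTCR_EL2) & STRTAB_STE_2_VTCR_MASK;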
>>
>>> + * vttbr: This is a shared value with the domain page table
>>> + * vtcr: The TCR settings are the same as CPU since he page
>> s/he/the/
>>
>>> + * tables are shared
>>> + */
>>> +
>>> + cfg->vmid = vmid;
>>> + cfg->vttbr = page_to_maddr(cfg->domain->arch.p2m.root);
>>> + cfg->vtcr = READ_SYSREG32(VTCR_EL2) & STRTAB_STE_2_VTCR_MASK;
>>
>> I still think this is really fragile. You at least need a comment on the other side (e.g. where VTCR_EL2 is written) to explain that you are relying on the value in other places.
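>>
>> E.g. where setup_virt_paging() programs VTCR_EL2, something like (exact wording up to you):
>>
>> /* This value is also consumed by the SMMUv3 driver (s2_cfg.vtcr). */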
> I can add the comment.
>
>>
>>> + return 0;
>>> +}
>>> static int arm_smmu_domain_finalise(struct iommu_domain *domain)
>>> {
>>> int ret;
>>> +#if 0 /* Xen: pgtbl_cfg not needed. So modify the function as needed */
>>> unsigned long ias, oas;
>>> enum io_pgtable_fmt fmt;
>>> struct io_pgtable_cfg pgtbl_cfg;
>>> struct io_pgtable_ops *pgtbl_ops;
>>> int (*finalise_stage_fn)(struct arm_smmu_domain *,
>>> struct io_pgtable_cfg *);
>>> +#endif
>>> struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>>> struct arm_smmu_device *smmu = smmu_domain->smmu;
>>> @@ -1575,6 +1693,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
>>> if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
>>> smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
>>> +#if 0
>>> switch (smmu_domain->stage) {
>>> case ARM_SMMU_DOMAIN_S1:
>>> ias = VA_BITS;
>>> @@ -1616,7 +1735,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
>>> ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
>>> if (ret < 0)
>>> free_io_pgtable_ops(pgtbl_ops);
>>> +#endif
>>> + ret = arm_smmu_domain_finalise_s2(smmu_domain);
>>> return ret;
>>> }
>>> @@ -1709,7 +1830,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>>> } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>>> ste->s1_cfg = &smmu_domain->s1_cfg;
>>> ste->s2_cfg = NULL;
>>> +#if 0 /*Xen: S1 configuration not needed */
>>
>> What would be the issue with leaving this code uncommented?
>>
>>> arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
>>> +#endif
>>> } else {
>>> ste->s1_cfg = NULL;
>>> ste->s2_cfg = &smmu_domain->s2_cfg;
>>> @@ -1721,6 +1844,7 @@ out_unlock:
>>> return ret;
>>> }
>>> > +#if 0
>>
>> /* Xen: ... */
>>
>>> static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
>>> phys_addr_t paddr, size_t size, int prot)
>>> {
>>> @@ -1772,6 +1896,7 @@ struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
>>> put_device(dev);
>>> return dev ? dev_get_drvdata(dev) : NULL;
>>> }
>>> +#endif
>>> static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>>> {
>>> @@ -1782,8 +1907,9 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>>> return sid < limit;
>>> }
>>> -
>>
>> Please don't remove newline.
>>
>>> +#if 0
>>> static struct iommu_ops arm_smmu_ops;
>>> +#endif
>>> static int arm_smmu_add_device(struct device *dev)
>>> {
>>> @@ -1791,9 +1917,12 @@ static int arm_smmu_add_device(struct device *dev)
>>> struct arm_smmu_device *smmu;
>>> struct arm_smmu_master_data *master;
>>> struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>>> +#if 0 /*Xen: iommu_group is not needed */
>>> struct iommu_group *group;
>>> +#endif
>>> - if (!fwspec || fwspec->ops != &arm_smmu_ops)
>>> + /* Xen: fwspec->ops are not needed */
>>> + if (!fwspec)
>>> return -ENODEV;
>>> /*
>>> * We _can_ actually withstand dodgy bus code re-calling add_device()
>>> @@ -1830,6 +1959,12 @@ static int arm_smmu_add_device(struct device *dev)
>>> }
>>> }
>>> +#if 0
>>> +/*
>>> + * Xen: Do not need an iommu group as the stream data is carried by the SMMU
>>> + * master device object
>>> + */
>>
>> This is better put before the #if 0, so an IDE will still show the comment even when the #if 0 block is folded.
>>
>>> +
>>> group = iommu_group_get_for_dev(dev);
>>> if (!IS_ERR(group)) {
>>> iommu_group_put(group);
>>> @@ -1837,8 +1972,16 @@ static int arm_smmu_add_device(struct device *dev)
>>> }
>>> return PTR_ERR_OR_ZERO(group);
>>> +#endif
>>> + return 0;
>>> }
>>> +/*
>>> + * Xen: We can potentially support this function and destroy a device. This
>>> + * will be relevant for PCI hotplug. So, will be implemented as needed after
>>> + * passthrough support is available.
>>> + */
>>> +#if 0
>>> static void arm_smmu_remove_device(struct device *dev)
>>> {
>>> struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>>> @@ -1974,7 +2117,7 @@ static struct iommu_ops arm_smmu_ops = {
>>> .put_resv_regions = arm_smmu_put_resv_regions,
>>> .pgsize_bitmap = -1UL, /* Restricted during device attach */
>>> };
>>> -
>>
>> Ditto for the newline. I know I didn't mention it in every place in the previous series. But I would have expected you to apply my comments everywhere.
>>
>>> +#endif
>>> /* Probing and initialisation functions */
>>> static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>>> struct arm_smmu_queue *q,
>>> @@ -1984,13 +2127,19 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>>> {
>>> size_t qsz = ((1 << q->max_n_shift) * dwords) << 3;
>>> - q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma, GFP_KERNEL);
>>> + /* The SMMU cache coherency property is always set. Since we are sharing the CPU translation tables
>>
>> /*
>> * ...
>>
>>> + * just make a regular allocation.
>>
>> I am not sure I understand it. AFAIU, q is for the command queue. So how would sharing the CPU translation tables help here?
>>
>> Furthermore, I don't understand how you can say the cache coherency property is always set. When I look at the driver, it seems to be able to handle non-coherent memory. So where do you modify that?
>>
>>> + */
>>> + q->base = _xzalloc(qsz, sizeof(void *));
>>> +
>>> if (!q->base) {
>>> dev_err(smmu->dev, "failed to allocate queue (0x%zx bytes)\n",
>>> qsz);
>>> return -ENOMEM;
>>> }
>>> + q->base_dma = virt_to_maddr(q->base);
>>> +
>>> q->prod_reg = arm_smmu_page1_fixup(prod_off, smmu);
>>> q->cons_reg = arm_smmu_page1_fixup(cons_off, smmu);
>>> q->ent_dwords = dwords;
>>> @@ -2056,6 +2205,7 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
>>> u64 reg;
>>> u32 size, l1size;
>>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>>> + u32 alignment;
>>> /* Calculate the L1 size, capped to the SIDSIZE. */
>>> size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
>>> @@ -2069,14 +2219,17 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
>>> size, smmu->sid_bits);
>>> l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
>>> - strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
>>> - GFP_KERNEL | __GFP_ZERO);
>>> + alignment = max_t(u32, cfg->num_l1_ents, 64);
>>
>> Same as before. I know I didn't go through the rest of the code. But you could have at least applied my comments on alignment here too. E.g. where does the 64 come from?
>>
>> But it looks to me like you want to create a function dmam_alloc_coherent that will do the allocation for you. This could be used in a few places within the driver file...
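>>
>> A rough, untested sketch of what I mean (alignment policy to be agreed; flsl() used to round up to a power of two):
>>
>> static void *dmam_alloc_coherent(struct device *dev, size_t size,
>>                                  dma_addr_t *dma_handle)
>> {
>>     void *vaddr;
>>     unsigned long alignment = size;
>>
>>     /* _xzalloc() requires a power-of-two alignment; round up if needed. */
>>     if ( alignment & (alignment - 1) )
>>         alignment = 1UL << flsl(alignment);
>>
>>     vaddr = _xzalloc(size, alignment);
>>     if ( !vaddr )
>>         return NULL;
>>
>>     *dma_handle = virt_to_maddr(vaddr);
>>
>>     return vaddr;
>> }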
> dmam_alloc_coherent uses the allocation size as the alignment. This is not as per the spec. But that being said, I am fine with replicating the code from Linux. That will make my life easier :).
>
>>
>>> + strtab = _xzalloc(l1size, l1size);
>>> +
>>> if (!strtab) {
>>> dev_err(smmu->dev,
>>> "failed to allocate l1 stream table (%u bytes)\n",
>>> size);
>>> return -ENOMEM;
>>> }
>>> +
>>> + cfg->strtab_dma = virt_to_maddr(strtab);
>>> cfg->strtab = strtab;
>>> /* Configure strtab_base_cfg for 2 levels */
>>> @@ -2098,14 +2251,16 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
>>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>>> size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
>>> - strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
>>> - GFP_KERNEL | __GFP_ZERO);
>>
>> ... such as here.
>>
>>> + strtab = _xzalloc(size, size);
>>
>> Hmmm, _xzalloc contains the following assert:
>>
>> ASSERT((align & (align - 1)) == 0);
>>
>> How are you sure the size will always honor this check?
> I can add another check or add a comment. So far the size has passed this check.
>>
>>> +
>>> if (!strtab) {
>>> dev_err(smmu->dev,
>>> "failed to allocate linear stream table (%u bytes)\n",
>>> size);
>>> return -ENOMEM;
>>> }
>>> +
>>> + cfg->strtab_dma = virt_to_maddr(strtab);
>>> cfg->strtab = strtab;
>>> cfg->num_l1_ents = 1 << smmu->sid_bits;
>>> @@ -2182,6 +2337,7 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
>>> 1, ARM_SMMU_POLL_TIMEOUT_US);
>>> }
>>> +#if 0 /* Xen: There is no MSI support as yet */
>>> static void arm_smmu_free_msis(void *data)
>>> {
>>> struct device *dev = data;
>>> @@ -2247,36 +2403,39 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
>>> /* Add callback to free MSIs on teardown */
>>> devm_add_action(dev, arm_smmu_free_msis, dev);
>>> }
>>> +#endif
>>> static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>>> {
>>> int irq, ret;
>>> +#if 0 /*Xen: Cannot setup msis for now */
>>> arm_smmu_setup_msis(smmu);
>>> +#endif
>>> /* Request interrupt lines */
>>> irq = smmu->evtq.q.irq;
>>> if (irq) {
>>> - ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
>>> - arm_smmu_evtq_thread,
>>> - IRQF_ONESHOT,
>>> - "arm-smmu-v3-evtq", smmu);
>>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>>
>> Why do you need to set the IRQ type? Can't it be found from the firmware tables?
>>
>>> + ret = request_irq(irq, arm_smmu_evtq_thread,
>>> + 0, "arm-smmu-v3-evtq", smmu);
>>
>> Please create a stub for devm_request_threaded_irq.
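>>
>> Something along these lines (untested; assuming irq_handler_t is typedef'd to the Xen handler type, and given Xen has no threaded IRQs the thread function can be invoked directly):
>>
>> static inline int devm_request_threaded_irq(struct device *dev,
>>                                             unsigned int irq,
>>                                             irq_handler_t handler,
>>                                             irq_handler_t thread_fn,
>>                                             unsigned long irqflags,
>>                                             const char *devname,
>>                                             void *dev_id)
>> {
>>     return request_irq(irq, thread_fn ? thread_fn : handler, irqflags,
>>                        devname, dev_id);
>> }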
>>
>>> if (ret < 0)
>>> dev_warn(smmu->dev, "failed to enable evtq irq\n");
>>> }
>>> irq = smmu->cmdq.q.irq;
>>> if (irq) {
>>> - ret = devm_request_irq(smmu->dev, irq,
>>> - arm_smmu_cmdq_sync_handler, 0,
>>> - "arm-smmu-v3-cmdq-sync", smmu);
>>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>>> + ret = request_irq(irq, arm_smmu_cmdq_sync_handler,
>>> + 0, "arm-smmu-v3-cmdq-sync", smmu);
>>
>> Ditto.
>>
>>> if (ret < 0)
>>> dev_warn(smmu->dev, "failed to enable cmdq-sync irq\n");
>>> }
>>> irq = smmu->gerr_irq;
>>> if (irq) {
>>> - ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
>>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>>> + ret = request_irq(irq, arm_smmu_gerror_handler,
>>> 0, "arm-smmu-v3-gerror", smmu);
>>
>> Ditto.
>>
>>> if (ret < 0)
>>> dev_warn(smmu->dev, "failed to enable gerror irq\n");
>>> @@ -2284,12 +2443,13 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>>> if (smmu->features & ARM_SMMU_FEAT_PRI) {
>>> irq = smmu->priq.q.irq;
>>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>>> if (irq) {
>>> - ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
>>> - arm_smmu_priq_thread,
>>> - IRQF_ONESHOT,
>>> - "arm-smmu-v3-priq",
>>> - smmu);
>>> + ret = request_irq(irq,
>>> + arm_smmu_priq_thread,
>>> + 0,
>>> + "arm-smmu-v3-priq",
>>> + smmu);
>>
>> Ditto.
>>
>>> if (ret < 0)
>>> dev_warn(smmu->dev,
>>> "failed to enable priq irq\n");
>>> @@ -2316,11 +2476,11 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
>>> * Cavium ThunderX2 implementation doesn't support unique
>>> * irq lines. Use single irq line for all the SMMUv3 interrupts.
>>> */
>>> - ret = devm_request_threaded_irq(smmu->dev, irq,
>>> - arm_smmu_combined_irq_handler,
>>> - arm_smmu_combined_irq_thread,
>>> - IRQF_ONESHOT,
>>> - "arm-smmu-v3-combined-irq", smmu);
>>> + ret = request_irq(irq,
>>> + arm_smmu_combined_irq_handler,
>>> + 0,
>>> + "arm-smmu-v3-combined-irq",
>>> + smmu);
>>
>> Ditto. And here is a good example of where a stub is useful. You set the IRQ type everywhere but not for this one.
>>
>>> if (ret < 0)
>>> dev_warn(smmu->dev, "failed to enable combined irq\n");
>>> } else
>>> @@ -2542,8 +2702,11 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>>> smmu->features |= ARM_SMMU_FEAT_STALLS;
>>> }
>>> +#if 0/* Xen: Do not enable Stage 1 translations */
>>
>> This is just saying stage-1 is available. So why do you care so much about disabling it? This is just adding more #if 0; we managed to get away without it in SMMUv1 by leaving the code as is.
>>
>>> +
>>> if (reg & IDR0_S1P)
>>> smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
>>> +#endif
>>> if (reg & IDR0_S2P)
>>> smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
>>> @@ -2616,10 +2779,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>>> if (reg & IDR5_GRAN4K)
>>> smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
>>> +#if 0 /* Xen: SMMU ops do not have a pgsize_bitmap member for Xen */
>>> if (arm_smmu_ops.pgsize_bitmap == -1UL)
>>> arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
>>> else
>>> arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
>>> +#endif
>>> /* Output address size */
>>> switch (reg & IDR5_OAS_MASK << IDR5_OAS_SHIFT) {
>>> @@ -2646,10 +2811,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>>> smmu->oas = 48;
>>> }
>>> +#if 0 /* Xen: There is no support for DMA mask */
>>
>> Stub it?
>>
>>> /* Set the DMA mask for our table walker */
>>> if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
>>> dev_warn(smmu->dev,
>>> "failed to set DMA mask for table walker\n");
>>> +#endif
>>> smmu->ias = max(smmu->ias, smmu->oas);
>>> @@ -2680,7 +2847,8 @@ static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>>> struct device *dev = smmu->dev;
>>> struct acpi_iort_node *node;
>>> - node = *(struct acpi_iort_node **)dev_get_platdata(dev);
>>> + /* Xen: Modification to get iort_node */
>>> + node = (struct acpi_iort_node *)dev->acpi_node;
>>> /* Retrieve SMMUv3 specific data */
>>> iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
>>> @@ -2703,7 +2871,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>>> static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>>> struct arm_smmu_device *smmu)
>>> {
>>> - struct device *dev = &pdev->dev;
>>> + struct device *dev = pdev;
>>> u32 cells;
>>> int ret = -EINVAL;
>>> @@ -2716,8 +2884,8 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>>> parse_driver_options(smmu);
>>> - if (of_dma_is_coherent(dev->of_node))
>>> - smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>>> + /* Xen: Set the COHERNECY feature */
>>> + smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>>
>> This looks completely wrong. You should only do it when the firmware tables say it is fine.
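>>
>> E.g. (untested) gate it on the "dma-coherent" property:
>>
>>     if ( dt_get_property(dev_to_dt(dev), "dma-coherent", NULL) )
>>         smmu->features |= ARM_SMMU_FEAT_COHERENCY;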
>>
>>> return ret;
>>> }
>>> @@ -2734,9 +2902,11 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>>> {
>>> int irq, ret;
>>> struct resource *res;
>>> +#if 0 /*Xen: Do not need to setup sysfs */
>>> resource_size_t ioaddr;
>>> +#endif
>>> struct arm_smmu_device *smmu;
>>> - struct device *dev = &pdev->dev;
>>> + struct device *dev = pdev;/* Xen: dev is ignored */
>>> bool bypass;
>>> smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
>>> @@ -2763,8 +2933,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>>> dev_err(dev, "MMIO region too small (%pr)\n", res);
>>> return -EINVAL;
>>> }
>>> +#if 0 /*Xen: Do not need to setup sysfs */
>>> ioaddr = res->start;
>>> -
>>
>> Again the newline.
>>
>>> +#endif
>>> smmu->base = devm_ioremap_resource(dev, res);
>>> if (IS_ERR(smmu->base))
>>> return PTR_ERR(smmu->base);
>>> @@ -2802,13 +2973,16 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>>> return ret;
>>> /* Record our private device structure */
>>> +#if 0 /* Xen: SMMU is not treated as a platform device */
>>> platform_set_drvdata(pdev, smmu);
>>> -
>>
>> Again the newline.
>>
>>> +#endif
>>> /* Reset the device */
>>> ret = arm_smmu_device_reset(smmu, bypass);
>>> if (ret)
>>> return ret;
>>> +/* Xen: Not creating an IOMMU device list for Xen */
>>> +#if 0
>>> /* And we're up. Go go go! */
>>> ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
>>> "smmu3.%pa", &ioaddr);
>>> @@ -2844,9 +3018,18 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>>> if (ret)
>>> return ret;
>>> }
>>> +#endif
>>> + /*
>>> + * Xen: Keep a list of all probed devices. This will be used to query
>>> + * the smmu devices based on the fwnode.
>>> + */
>>> + INIT_LIST_HEAD(&smmu->devices);
>>> + spin_lock(&arm_smmu_devices_lock);
>>> + list_add(&smmu->devices, &arm_smmu_devices);
>>> + spin_unlock(&arm_smmu_devices_lock);
>>> return 0;
>>> }
>>> -
>>
>> Again the newline removed and /* Xen ... */
>>> +#if 0
>>> static int arm_smmu_device_remove(struct platform_device *pdev)
>>> {
>>> struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
>>> @@ -2860,6 +3043,10 @@ static void arm_smmu_device_shutdown(struct platform_device *pdev)
>>> {
>>> arm_smmu_device_remove(pdev);
>>> }
>>> +#endif
>>> +
>>> +#define MODULE_DEVICE_TABLE(type, name)
>>> +#define of_device_id dt_device_match
>>
>> Those should be defined at the top.
>>
>>> static const struct of_device_id arm_smmu_of_match[] = {
>>> { .compatible = "arm,smmu-v3", },
>>> @@ -2867,6 +3054,7 @@ static const struct of_device_id arm_smmu_of_match[] = {
>>> };
>>> MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
>>> +#if 0
>>> static struct platform_driver arm_smmu_driver = {
>>> .driver = {
>>> .name = "arm-smmu-v3",
>>> @@ -2883,3 +3071,318 @@ IOMMU_OF_DECLARE(arm_smmuv3, "arm,smmu-v3", NULL);
>>> MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
>>> MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
>>> MODULE_LICENSE("GPL v2");
>>> +#endif
>>> +
>>> +/***** Start of Xen specific code *****/
>>> +
>>> +static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
>>> +{
>>> + struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
>>> + struct iommu_domain *cfg;
>>> +
>>> + spin_lock(&smmu_domain->lock);
>>> + list_for_each_entry(cfg, &smmu_domain->iommu_domains, list) {
>>> + /*
>>> + * Only invalidate the context when SMMU is present.
>>> + * This is because the context initialization is delayed
>>> + * until a master has been added.
>>> + */
>>> + if (unlikely(!ACCESS_ONCE(cfg->priv->smmu)))
>>> + continue;
>>> + arm_smmu_tlb_inv_context(cfg->priv);
>>> + }
>>> + spin_unlock(&smmu_domain->lock);
>>> + return 0;
>>> +}
>>> +
>>> +static int __must_check arm_smmu_iotlb_flush(struct domain *d,
>>> + unsigned long gfn,
>>> + unsigned int page_count)
>>> +{
>>> + return arm_smmu_iotlb_flush_all(d);
>>> +}
>>> +
>>> +static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
>>> + struct device *dev)
>>> +{
>>> + struct iommu_domain *domain;
>>> + struct arm_smmu_xen_domain *xen_domain;
>>> + struct arm_smmu_device *smmu;
>>> + struct arm_smmu_domain *smmu_domain;
>>> +
>>> + xen_domain = dom_iommu(d)->arch.priv;
>>> +
>>> + smmu = arm_smmu_get_by_fwnode(dev->iommu_fwspec->iommu_fwnode);
>>> + if (!smmu)
>>> + return NULL;
>>> +
>>> + /*
>>> + * Loop through the &xen_domain->contexts to locate a context
>>> + * assigned to this SMMU
>>> + */
>>> + list_for_each_entry(domain, &xen_domain->iommu_domains, list) {
>>> + smmu_domain = to_smmu_domain(domain);
>>> + if (smmu_domain->smmu == smmu)
>>> + return domain;
>>> + }
>>> +
>>> + return NULL;
>>> +}
>>> +
>>> +static void arm_smmu_destroy_iommu_domain(struct iommu_domain *domain)
>>> +{
>>> + list_del(&domain->list);
>>> + arm_smmu_domain_free(domain);
>>> +}
>>> +
>>> +static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
>>> + struct device *dev, u32 flag)
>>> +{
>>> + int ret = 0;
>>> + struct iommu_domain *domain;
>>> + struct arm_smmu_xen_domain *xen_domain;
>>> + struct arm_smmu_domain *arm_smmu;
>>> +
>>> + xen_domain = dom_iommu(d)->arch.priv;
>>> +
>>> + if (!dev->archdata.iommu) {
>>> + dev->archdata.iommu = xzalloc(struct arm_smmu_xen_device);
>>> + if (!dev->archdata.iommu)
>>> + return -ENOMEM;
>>> + }
>>> +
>>> + ret = arm_smmu_add_device(dev);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + spin_lock(&xen_domain->lock);
>>> +
>>> + /*
>>> + * Check to see if an iommu_domain already exists for this xen domain
>>> + * under the same SMMU
>>> + */
>>> + domain = arm_smmu_get_domain(d, dev);
>>> + if (!domain) {
>>> +
>>> + domain = arm_smmu_domain_alloc(IOMMU_DOMAIN_DMA);
>>> + if (!domain) {
>>> + ret = -ENOMEM;
>>> + goto out;
>>> + }
>>> +
>>> + arm_smmu = to_smmu_domain(domain);
>>> + arm_smmu->s2_cfg.domain = d;
>>> +
>>> + /* Chain the new context to the domain */
>>> + list_add(&domain->list, &xen_domain->iommu_domains);
>>> +
>>> + }
>>> +
>>> + ret = arm_smmu_attach_dev(domain, dev);
>>> + if (ret) {
>>> + if (domain->ref.counter == 0)
>>> + arm_smmu_destroy_iommu_domain(domain);
>>> + } else {
>>> + atomic_inc(&domain->ref);
>>> + }
>>> +
>>> +out:
>>> + spin_unlock(&xen_domain->lock);
>>> + return ret;
>>> +}
>>> +
>>> +static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
>>> +{
>>> + struct iommu_domain *domain = arm_smmu_get_domain(d, dev);
>>> + struct arm_smmu_xen_domain *xen_domain;
>>> + struct arm_smmu_domain *arm_smmu = to_smmu_domain(domain);
>>> +
>>> + xen_domain = dom_iommu(d)->arch.priv;
>>> +
>>> + if (!arm_smmu || arm_smmu->s2_cfg.domain != d) {
>>> + dev_err(dev, " not attached to domain %d\n", d->domain_id);
>>> + return -ESRCH;
>>> + }
>>> +
>>> + spin_lock(&xen_domain->lock);
>>> +
>>> + arm_smmu_detach_dev(dev);
>>> + atomic_dec(&domain->ref);
>>> +
>>> + if (domain->ref.counter == 0)
>>> + arm_smmu_destroy_iommu_domain(domain);
>>> +
>>> + spin_unlock(&xen_domain->lock);
>>> +
>>> +
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
>>> + u8 devfn, struct device *dev)
>>> +{
>>> + int ret = 0;
>>> +
>>> + /* Don't allow remapping on other domain than hwdom */
>>> + if (t && t != hardware_domain)
>>> + return -EPERM;
>>> +
>>> + if (t == s)
>>> + return 0;
>>> +
>>> + ret = arm_smmu_deassign_dev(s, dev);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + if (t) {
>>> + /* No flags are defined for ARM. */
>>> + ret = arm_smmu_assign_dev(t, devfn, dev, 0);
>>> + if (ret)
>>> + return ret;
>>> + }
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static int arm_smmu_iommu_domain_init(struct domain *d)
>>> +{
>>> + struct arm_smmu_xen_domain *xen_domain;
>>> +
>>> + xen_domain = xzalloc(struct arm_smmu_xen_domain);
>>> + if (!xen_domain)
>>> + return -ENOMEM;
>>> +
>>> + spin_lock_init(&xen_domain->lock);
>>> + INIT_LIST_HEAD(&xen_domain->iommu_domains);
>>> +
>>> + dom_iommu(d)->arch.priv = xen_domain;
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
>>> +{
>>> +}
>>> +
>>> +static void arm_smmu_iommu_domain_teardown(struct domain *d)
>>> +{
>>> + struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>>> +
>>> + ASSERT(list_empty(&xen_domain->iommu_domains));
>>> + xfree(xen_domain);
>>> +}
>>> +
>>> +static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
>>> + unsigned long mfn, unsigned int flags)
>>> +{
>>> + p2m_type_t t;
>>> +
>>> + /*
>>> + * Grant mappings can be used for DMA requests. The dev_bus_addr
>>> + * returned by the hypercall is the MFN (not the IPA). For device
>>> + * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain
>>> + * p2m to allow DMA request to work.
>>> + * This is only valid when the domain is directed mapped. Hence this
>>> + * function should only be used by gnttab code with gfn == mfn.
>>> + */
>>> + BUG_ON(!is_domain_direct_mapped(d));
>>> + BUG_ON(mfn != gfn);
>>> +
>>> + /* We only support readable and writable flags */
>>> + if (!(flags & (IOMMUF_readable | IOMMUF_writable)))
>>> + return -EINVAL;
>>> +
>>> + t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro;
>>> +
>>> + /*
>>> + * The function guest_physmap_add_entry replaces the current mapping
>>> + * if there is already one...
>>> + */
>>> + return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
>>> +}
>>> +
>>> +static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
>>> +{
>>> + /*
>>> + * This function should only be used by gnttab code when the domain
>>> + * is direct mapped
>>> + */
>>> + if (!is_domain_direct_mapped(d))
>>> + return -EINVAL;
>>> +
>>> + return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
>>> +}
>>> +
>>> +static const struct iommu_ops arm_smmu_iommu_ops = {
>>> + .init = arm_smmu_iommu_domain_init,
>>> + .hwdom_init = arm_smmu_iommu_hwdom_init,
>>> + .teardown = arm_smmu_iommu_domain_teardown,
>>> + .iotlb_flush = arm_smmu_iotlb_flush,
>>> + .iotlb_flush_all = arm_smmu_iotlb_flush_all,
>>> + .assign_device = arm_smmu_assign_dev,
>>> + .reassign_device = arm_smmu_reassign_dev,
>>> + .map_page = arm_smmu_map_page,
>>> + .unmap_page = arm_smmu_unmap_page,
>>> +};
>>> +
>>> +static
>>> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
>>> +{
>>> + struct arm_smmu_device *smmu = NULL;
>>> +
>>> + spin_lock(&arm_smmu_devices_lock);
>>> + list_for_each_entry(smmu, &arm_smmu_devices, devices) {
>>> + if (smmu->dev->fwnode == fwnode)
>>> + break;
>>> + }
>>> + spin_unlock(&arm_smmu_devices_lock);
>>> +
>>> + return smmu;
>>> +}
>>> +
>>> +static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>>> + const void *data)
>>> +{
>>> + int rc;
>>> +
>>> + /*
>>> + * Even if the device can't be initialized, we don't want to
>>> + * give the SMMU device to dom0.
>>> + */
>>> + dt_device_set_used_by(dev, DOMID_XEN);
>>> +
>>> + rc = arm_smmu_device_probe(dt_to_dev(dev));
>>> + if (rc)
>>> + return rc;
>>> +
>>> + iommu_set_ops(&arm_smmu_iommu_ops);
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
>>> + .dt_match = arm_smmu_of_match,
>>> + .init = arm_smmu_dt_init,
>>> +DT_DEVICE_END
>>> +
>>> +#ifdef CONFIG_ACPI
>>> +/* Set up the IOMMU */
>>> +static int __init arm_smmu_acpi_init(const void *data)
>>> +{
>>> + int rc;
>>> + rc = arm_smmu_device_probe((struct device *)data);
>>> +
>>> + if (rc)
>>> + return rc;
>>> +
>>> + iommu_set_ops(&arm_smmu_iommu_ops);
>>> + return 0;
>>> +}
>>> +
>>> +ACPI_DEVICE_START(asmmuv3, "ARM SMMU V3", DEVICE_IOMMU)
>>> + .class_type = ACPI_IORT_NODE_SMMU_V3,
>>> + .init = arm_smmu_acpi_init,
>>> +ACPI_DEVICE_END
>>> +
>>> +#endif
>>>
>> Cheers,
>>
>
> I'll fix the newlines as needed. I thought I had got them all but it seems a few were still missed.
> Thanks,
> Sameer
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
2017-12-05 23:26 ` Goel, Sameer
2017-12-06 9:55 ` Julien Grall
@ 2017-12-06 10:01 ` Julien Grall
1 sibling, 0 replies; 19+ messages in thread
From: Julien Grall @ 2017-12-06 10:01 UTC (permalink / raw)
To: Goel, Sameer, xen-devel, julien.grall, mjaggi; +Cc: sstabellini, shankerd
Hi Sameer,
On 12/05/2017 11:26 PM, Goel, Sameer wrote:
> On 12/5/2017 7:17 AM, Julien Grall wrote:
>> On 05/12/17 03:59, Sameer Goel wrote:
>>> + * tables are shared
>>> + */
>>> +
>>> + cfg->vmid = vmid;
>>> + cfg->vttbr = page_to_maddr(cfg->domain->arch.p2m.root);
>>> + cfg->vtcr = READ_SYSREG32(VTCR_EL2) & STRTAB_STE_2_VTCR_MASK;
>>
>> I still think this is really fragile. You at least need a comment on the other side (e.g. where VTCR_EL2 is written) to explain that you are relying on the value in other places.
> I can add the comment.
Yes, please, on both sides.
>>> + */
>>> + q->base = _xzalloc(qsz, sizeof(void *));
>>> +
>>> if (!q->base) {
>>> dev_err(smmu->dev, "failed to allocate queue (0x%zx bytes)\n",
>>> qsz);
>>> return -ENOMEM;
>>> }
>>> + q->base_dma = virt_to_maddr(q->base);
>>> +
>>> q->prod_reg = arm_smmu_page1_fixup(prod_off, smmu);
>>> q->cons_reg = arm_smmu_page1_fixup(cons_off, smmu);
>>> q->ent_dwords = dwords;
>>> @@ -2056,6 +2205,7 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
>>> u64 reg;
>>> u32 size, l1size;
>>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>>> + u32 alignment;
>>> /* Calculate the L1 size, capped to the SIDSIZE. */
>>> size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
>>> @@ -2069,14 +2219,17 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
>>> size, smmu->sid_bits);
>>> l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
>>> - strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
>>> - GFP_KERNEL | __GFP_ZERO);
>>> + alignment = max_t(u32, cfg->num_l1_ents, 64);
>>
>> Same as before. I know I didn't go through the rest of the code. But you could have at least applied my comments on alignment here too. E.g. where does the 64 come from?
>>
>> But it looks to me like you want to create a function dmam_alloc_coherent that will do the allocation for you. This could be used in a few places within the driver file...
> dmam_alloc_coherent uses the allocation size as the alignment. This is not as per the spec. But that being said, I am fine with replicating the code from Linux. That will make my life easier :).
I am a bit confused. Does it mean the Linux driver violates the spec? If
so, that should be fixed in both, and not only in Xen.
>
>>
>>> + strtab = _xzalloc(l1size, l1size);
>>> +
>>> if (!strtab) {
>>> dev_err(smmu->dev,
>>> "failed to allocate l1 stream table (%u bytes)\n",
>>> size);
>>> return -ENOMEM;
>>> }
>>> +
>>> + cfg->strtab_dma = virt_to_maddr(strtab);
>>> cfg->strtab = strtab;
>>> /* Configure strtab_base_cfg for 2 levels */
>>> @@ -2098,14 +2251,16 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
>>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>>> size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
>>> - strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
>>> - GFP_KERNEL | __GFP_ZERO);
>>
>> ... such as here.
>>
>>> + strtab = _xzalloc(size, size);
>>
>> Hmmm, _xzalloc contains the following assert:
>>
>> ASSERT((align & (align - 1)) == 0);
>>
>> How are you sure the size will always honor this check?
> I can add another check or add a comment. So far the size has passed this check.
I was not able to convince myself that:
size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3)
will always honor the check. I would be ok with a comment explaining why it
should always work.
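FWIW, if STRTAB_STE_DWORDS is still 8 as in Linux, (STRTAB_STE_DWORDS << 3)
is 64, so size would be 2^(sid_bits + 6) and therefore a power of two. But
that reasoning belongs in a comment next to the allocation.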
Cheers
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
2017-12-05 3:59 ` [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver Sameer Goel
2017-12-05 14:17 ` Julien Grall
@ 2017-12-12 8:09 ` Manish Jaggi
2017-12-12 15:34 ` Goel, Sameer
1 sibling, 1 reply; 19+ messages in thread
From: Manish Jaggi @ 2017-12-12 8:09 UTC (permalink / raw)
To: Sameer Goel, xen-devel, julien.grall; +Cc: sstabellini, shankerd
Hi Sameer,
On 12/05/2017 09:29 AM, Sameer Goel wrote:
> +static
> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> + struct arm_smmu_device *smmu = NULL;
> +
> + spin_lock(&arm_smmu_devices_lock);
> + list_for_each_entry(smmu, &arm_smmu_devices, devices) {
> + if (smmu->dev->fwnode == fwnode)
Shouldn't it be
if (smmu->dev->iommu_fwspec->iommu_fwnode == fwnode)
> + break;
> + }
> + spin_unlock(&arm_smmu_devices_lock);
> +
> + return smmu;
> +}
> +
-Manish
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
2017-12-12 8:09 ` Manish Jaggi
@ 2017-12-12 15:34 ` Goel, Sameer
0 siblings, 0 replies; 19+ messages in thread
From: Goel, Sameer @ 2017-12-12 15:34 UTC (permalink / raw)
To: Manish Jaggi, xen-devel, julien.grall; +Cc: sstabellini, shankerd
On 12/12/2017 1:09 AM, Manish Jaggi wrote:
> Hi Sameer,
>
> On 12/05/2017 09:29 AM, Sameer Goel wrote:
>
>> +static
>> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
>> +{
>> + struct arm_smmu_device *smmu = NULL;
>> +
>> + spin_lock(&arm_smmu_devices_lock);
>> + list_for_each_entry(smmu, &arm_smmu_devices, devices) {
>> + if (smmu->dev->fwnode == fwnode)
>
> Shoudnt it be
>
> if (smmu->dev->iommu_fwspec->iommu_fwnode == fwnode)
>
It was working absolutely fine with the prior patch set for IORT that I had posted :). I had added fwnode as a part of the device structure.
I am fine if you want to change this.
>> + break;
>> + }
>> + spin_unlock(&arm_smmu_devices_lock);
>> +
>> + return smmu;
>> +}
>> +
>
> -Manish
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel
--
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc. Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 2/4] xen/linux_compat: Add a Linux compat header
2017-12-05 12:31 ` Julien Grall
@ 2017-12-15 22:32 ` Goel, Sameer
2017-12-15 22:39 ` Julien Grall
0 siblings, 1 reply; 19+ messages in thread
From: Goel, Sameer @ 2017-12-15 22:32 UTC (permalink / raw)
To: Julien Grall, xen-devel, julien.grall, mjaggi
Cc: sstabellini, wei.liu2, george.dunlap, Andrew.Cooper3, jbeulich,
Ian.Jackson, nd, shankerd
On 12/5/2017 5:31 AM, Julien Grall wrote:
> Hi Sameer,
>
> On 05/12/17 03:59, Sameer Goel wrote:
>> For porting files from Linux it is useful to have a Linux API to Xen API
>> mapping header at a common location.
>> This file adds common API functions and other defines that are needed for
>> porting arm SMMU drivers.
>>
>> ---
>> xen/include/xen/linux_compat.h | 106 +++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 106 insertions(+)
>> create mode 100644 xen/include/xen/linux_compat.h
>>
>> diff --git a/xen/include/xen/linux_compat.h b/xen/include/xen/linux_compat.h
>> new file mode 100644
>> index 0000000..217e0cc
>> --- /dev/null
>> +++ b/xen/include/xen/linux_compat.h
>> @@ -0,0 +1,106 @@
>> +/******************************************************************************
>> + * include/xen/linux_compat.h
>> + *
>> + * Compatibility defines for porting code from Linux to Xen
>> + *
>> + * Copyright (c) 2017 Linaro Limited
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __XEN_LINUX_COMPAT_H__
>> +#define __XEN_LINUX_COMPAT_H__
>> +
>> +#include <asm/types.h>
>> +
>> +typedef paddr_t phys_addr_t;
>> +typedef paddr_t dma_addr_t;
>> +
>> +/* Alias to Xen device tree helpers */
>> +#define device_node dt_device_node
>> +#define of_phandle_args dt_phandle_args
>> +#define of_device_id dt_device_match
>> +#define of_match_node dt_match_node
>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
>> +#define of_property_read_bool dt_property_read_bool
>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>> +/* The user should consider if it is safe to treat mutex as a spinlock */
>
> I am against defining mutex as spinlock in a generic header. People will overlook it and it is hardly going to be detected in a verbatim port.
>
>> This should be done on a case-by-case basis.
>
>> +#define mutex spinlock_t
>> +#define mutex_init spin_lock_init
>> +#define mutex_lock spin_lock
>> +#define mutex_unlock spin_unlock
>> +
>> +#define ilog2 LOG_2
>
> There is only one user of LOG_2 in Xen. So wouldn't it be better to rename directly to ilog2?
It's used in a couple of places (x86_64/asm-offsets.c). I can change that file too; let me know what you think. I am keeping this for now.
>
>> +
>> +#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) \
>> +({ \
>> + s_time_t deadline = NOW() + MICROSECS(timeout_us); \
>> + for (;;) \
>> + { \
>> + (val) = op(addr); \
>> + if ( cond ) \
>> + break; \
>> + if ( NOW() > deadline ) \
>> + { \
>> + (val) = op(addr); \
>> + break; \
>> + } \
>> + udelay(sleep_us); \
>> + } \
>> + (cond) ? 0 : -ETIMEDOUT; \
>> +})
>> +
>> +#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
>> + readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
>
> I don't think putting read* macros in a common header is necessary. Their use in Linux is very limited.
>
>> +
>> +/* Xen: Helpers for IRQ functions */
>> +#define request_irq(irq, func, flags, name, dev) request_irq(irq, flags, func, name, dev)
>> +#define free_irq release_irq
>> +
>> +enum irqreturn {
>> + IRQ_NONE = (0 << 0),
>> + IRQ_HANDLED = (1 << 0),
>> + IRQ_WAKE_THREAD = (2 << 0),
>> +};
>> +
>> +typedef enum irqreturn irqreturn_t;
>> +
>> +/* Device logger functions */
>> +#define dev_print(dev, lvl, fmt, ...) \
>> + printk(lvl fmt, ## __VA_ARGS__)
>> +
>> +#define dev_dbg(dev, fmt, ...) dev_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
>> +#define dev_notice(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
>> +#define dev_warn(dev, fmt, ...) dev_print(dev, XENLOG_WARNING, fmt, ## __VA_ARGS__)
>> +#define dev_err(dev, fmt, ...) dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
>> +#define dev_info(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
>> +
>> +#define dev_err_ratelimited(dev, fmt, ...) \
>> + dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
>> +
>> +#define dev_name(dev) dt_node_full_name(dev_to_dt(dev))
>> +
>> +/* Alias to Xen allocation helpers */
>> +#define kfree xfree
>> +#define kmalloc(size, flags) _xmalloc(size, sizeof(void *))
>> +#define kzalloc(size, flags) _xzalloc(size, sizeof(void *))
>> +#define devm_kzalloc(dev, size, flags) _xzalloc(size, sizeof(void *))
>> +#define kmalloc_array(size, n, flags) _xmalloc_array(size, sizeof(void *), n)
>> +
>> +/* Alias to Xen time functions */
>> +#define ktime_t s_time_t
>> +#define ktime_add_us(t,i) (NOW() + MICROSECS(i))
>> +#define ktime_compare(t,i) (NOW() > (i))
>> +
>> +#endif /* __XEN_LINUX_COMPAT_H__ */
>>
>
> Cheers,
>
--
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc. Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 2/4] xen/linux_compat: Add a Linux compat header
2017-12-15 22:32 ` Goel, Sameer
@ 2017-12-15 22:39 ` Julien Grall
0 siblings, 0 replies; 19+ messages in thread
From: Julien Grall @ 2017-12-15 22:39 UTC (permalink / raw)
To: Goel, Sameer, xen-devel, julien.grall, mjaggi
Cc: sstabellini, wei.liu2, george.dunlap, Andrew.Cooper3, jbeulich,
Ian.Jackson, nd, shankerd
On 15/12/2017 22:32, Goel, Sameer wrote:
>
>
> On 12/5/2017 5:31 AM, Julien Grall wrote:
>> Hi Sameer,
>>
>> On 05/12/17 03:59, Sameer Goel wrote:
>>> For porting files from Linux it is useful to have a Linux API to Xen API
>>> mapping header at a common location.
>>> This file adds common API functions and other defines that are needed for
>>> porting arm SMMU drivers.
>>>
>>> ---
>>> xen/include/xen/linux_compat.h | 106 +++++++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 106 insertions(+)
>>> create mode 100644 xen/include/xen/linux_compat.h
>>>
>>> diff --git a/xen/include/xen/linux_compat.h b/xen/include/xen/linux_compat.h
>>> new file mode 100644
>>> index 0000000..217e0cc
>>> --- /dev/null
>>> +++ b/xen/include/xen/linux_compat.h
>>> @@ -0,0 +1,106 @@
>>> +/******************************************************************************
>>> + * include/xen/linux_compat.h
>>> + *
>>> + * Compatibility defines for porting code from Linux to Xen
>>> + *
>>> + * Copyright (c) 2017 Linaro Limited
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify
>>> + * it under the terms of the GNU General Public License as published by
>>> + * the Free Software Foundation; either version 2 of the License, or
>>> + * (at your option) any later version.
>>> + *
>>> + * This program is distributed in the hope that it will be useful,
>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>> + * GNU General Public License for more details.
>>> + *
>>> + * You should have received a copy of the GNU General Public License
>>> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
>>> + */
>>> +
>>> +#ifndef __XEN_LINUX_COMPAT_H__
>>> +#define __XEN_LINUX_COMPAT_H__
>>> +
>>> +#include <asm/types.h>
>>> +
>>> +typedef paddr_t phys_addr_t;
>>> +typedef paddr_t dma_addr_t;
>>> +
>>> +/* Alias to Xen device tree helpers */
>>> +#define device_node dt_device_node
>>> +#define of_phandle_args dt_phandle_args
>>> +#define of_device_id dt_device_match
>>> +#define of_match_node dt_match_node
>>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
>>> +#define of_property_read_bool dt_property_read_bool
>>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>>> +/* The user should consider if it is safe to treat mutex as a spinlock */
>>
>> I am against defining mutex as spinlock in a generic header. People will overlook it and it is hardly going to be detected in a verbatim port.
>>
>> This should be done on a case-by-case basis.
>>
>>> +#define mutex spinlock_t
>>> +#define mutex_init spin_lock_init
>>> +#define mutex_lock spin_lock
>>> +#define mutex_unlock spin_unlock
>>> +
>>> +#define ilog2 LOG_2
>>
>> There is only one user of LOG_2 in Xen. So wouldn't it be better to rename directly to ilog2?
>
> It's used in a couple of places (x86_64/asm-offsets.c). I can change that file too; let me know what you think. I am keeping this for now.
There is exactly one use in x86_64/asm-offsets.c. So please rename it
rather than adding yet another alias.
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
2017-12-05 14:17 ` Julien Grall
2017-12-05 23:26 ` Goel, Sameer
@ 2017-12-15 22:45 ` Goel, Sameer
2017-12-16 6:05 ` Goel, Sameer
2 siblings, 0 replies; 19+ messages in thread
From: Goel, Sameer @ 2017-12-15 22:45 UTC (permalink / raw)
To: Julien Grall, xen-devel, julien.grall, mjaggi; +Cc: sstabellini, shankerd
On 12/5/2017 7:17 AM, Julien Grall wrote:
> Hello,
>
> On 05/12/17 03:59, Sameer Goel wrote:
>> This driver follows an approach similar to smmu driver. The intent here
>> is to reuse as much Linux code as possible.
>> - Glue code has been introduced in headers to bridge the API calls.
>> - Called Linux functions from the Xen IOMMU function calls.
>> - Xen modifications are preceded by /*Xen: comment */
>> - New config items for SMMUv3 and legacy SMMU have been defined.
>
> There is no reason to touch the legacy SMMU in this patch. Please move that change out of it.
Do you want me to remove the config item for Legacy SMMU?
>
>>
>> Signed-off-by: Sameer Goel <sgoel@codeaurora.org>
>> ---
>> xen/drivers/Kconfig | 2 +
>> xen/drivers/passthrough/arm/Kconfig | 14 +
>> xen/drivers/passthrough/arm/Makefile | 3 +-
>> xen/drivers/passthrough/arm/arm_smmu.h | 189 ++++++++++
>> xen/drivers/passthrough/arm/smmu-v3.c | 619 ++++++++++++++++++++++++++++++---
>> 5 files changed, 768 insertions(+), 59 deletions(-)
>> create mode 100644 xen/drivers/passthrough/arm/Kconfig
>> create mode 100644 xen/drivers/passthrough/arm/arm_smmu.h
>>
>> diff --git a/xen/drivers/Kconfig b/xen/drivers/Kconfig
>> index bc3a54f..6126553 100644
>> --- a/xen/drivers/Kconfig
>> +++ b/xen/drivers/Kconfig
>> @@ -12,4 +12,6 @@ source "drivers/pci/Kconfig"
>> source "drivers/video/Kconfig"
>> +source "drivers/passthrough/arm/Kconfig"
>> +
>> endmenu
>> diff --git a/xen/drivers/passthrough/arm/Kconfig b/xen/drivers/passthrough/arm/Kconfig
>> new file mode 100644
>> index 0000000..9ac4cea
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/arm/Kconfig
>> @@ -0,0 +1,14 @@
>> +
>> +config ARM_SMMU
>> + bool "ARM SMMU v1/2 support"
>> + depends on ARM_64
>
> Why? SMMUv1 and SMMUv2 support Arm 32-bit.
>
>> + help
>> + Support for implementations of the ARM System MMU architecture. (1/2)
>
> I am not sure I understand the (1/2) after the full stop.
>
>> +
>> +config ARM_SMMU_v3
>> + bool "ARM SMMUv3 Support"
>> + depends on ARM_64
>> + help
>> + Support for implementations of the ARM System MMU architecture
>> + version 3.
>> +
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
2017-12-05 14:17 ` Julien Grall
2017-12-05 23:26 ` Goel, Sameer
2017-12-15 22:45 ` Goel, Sameer
@ 2017-12-16 6:05 ` Goel, Sameer
2 siblings, 0 replies; 19+ messages in thread
From: Goel, Sameer @ 2017-12-16 6:05 UTC (permalink / raw)
To: Julien Grall, xen-devel, julien.grall, mjaggi; +Cc: sstabellini, shankerd
On 12/5/2017 7:17 AM, Julien Grall wrote:
> Hello,
>
> On 05/12/17 03:59, Sameer Goel wrote:
>> This driver follows an approach similar to smmu driver. The intent here
>> is to reuse as much Linux code as possible.
>> - Glue code has been introduced in headers to bridge the API calls.
>> - Called Linux functions from the Xen IOMMU function calls.
>> - Xen modifications are preceded by /*Xen: comment */
>> - New config items for SMMUv3 and legacy SMMU have been defined.
>
> There is no reason to touch the legacy SMMU in this patch. Please move that change out of it.
Agreed.
>
>>
>> Signed-off-by: Sameer Goel <sgoel@codeaurora.org>
>> ---
>> xen/drivers/Kconfig | 2 +
>> xen/drivers/passthrough/arm/Kconfig | 14 +
>> xen/drivers/passthrough/arm/Makefile | 3 +-
>> xen/drivers/passthrough/arm/arm_smmu.h | 189 ++++++++++
>> xen/drivers/passthrough/arm/smmu-v3.c | 619 ++++++++++++++++++++++++++++++---
>> 5 files changed, 768 insertions(+), 59 deletions(-)
>> create mode 100644 xen/drivers/passthrough/arm/Kconfig
>> create mode 100644 xen/drivers/passthrough/arm/arm_smmu.h
>>
>> diff --git a/xen/drivers/Kconfig b/xen/drivers/Kconfig
>> index bc3a54f..6126553 100644
>> --- a/xen/drivers/Kconfig
>> +++ b/xen/drivers/Kconfig
>> @@ -12,4 +12,6 @@ source "drivers/pci/Kconfig"
>> source "drivers/video/Kconfig"
>> +source "drivers/passthrough/arm/Kconfig"
>> +
>> endmenu
>> diff --git a/xen/drivers/passthrough/arm/Kconfig b/xen/drivers/passthrough/arm/Kconfig
>> new file mode 100644
>> index 0000000..9ac4cea
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/arm/Kconfig
>> @@ -0,0 +1,14 @@
>> +
>> +config ARM_SMMU
>> + bool "ARM SMMU v1/2 support"
>> + depends on ARM_64
>
> Why? SMMUv1 and SMMUv2 support Arm 32-bit.
>
>> + help
>> + Support for implementations of the ARM System MMU architecture. (1/2)
>
> I am not sure I understand the (1/2) after the full stop.
I'll fix this.
>
>> +
>> +config ARM_SMMU_v3
>> + bool "ARM SMMUv3 Support"
>> + depends on ARM_64
>> + help
>> + Support for implementations of the ARM System MMU architecture
>> + version 3.
>> +
>> diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
>> index f4cd26e..5b3eb15 100644
>> --- a/xen/drivers/passthrough/arm/Makefile
>> +++ b/xen/drivers/passthrough/arm/Makefile
>> @@ -1,2 +1,3 @@
>> obj-y += iommu.o
>> -obj-y += smmu.o
>> +obj-$(CONFIG_ARM_SMMU) += smmu.o
>> +obj-$(CONFIG_ARM_SMMU_v3) += smmu-v3.o
>> diff --git a/xen/drivers/passthrough/arm/arm_smmu.h b/xen/drivers/passthrough/arm/arm_smmu.h
>> new file mode 100644
>> index 0000000..b5e161f
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/arm/arm_smmu.h
>
> I don't think there is any value in using the Linux coding style in this header. It contains Xen stubs.
>
> I would also have expected this new file to come in a separate patch, with the associated modifications to SMMUv2. That would make it easier to see what could be common.
I'll make this a separate patch.
>
>> @@ -0,0 +1,189 @@
>> +/******************************************************************************
>> + * ./arm_smmu.h
>> + *
>> + * Common compatibility defines and data_structures for porting arm smmu
>> + * drivers from Linux.
>
> [...]
>
>> +static struct resource *platform_get_resource(struct platform_device *pdev,
>> + unsigned int type,
>> + unsigned int num)
>> +{
>> + /*
>> + * The resource is only used between 2 calls of platform_get_resource.
>> + * It's quite ugly but it's avoid to add too much code in the part
>> + * imported from Linux
>> + */
>> + static struct resource res;
>> + struct acpi_iort_node *iort_node;
>> + struct acpi_iort_smmu_v3 *node_smmu_data;
>> + int ret = 0;
>> +
>> + res.type = type;
>> +
>> + switch (type) {
>> + case IORESOURCE_MEM:
>> + if (pdev->type == DEV_ACPI) {
>> + ret = 1;
>> + iort_node = pdev->acpi_node;
>> + node_smmu_data =
>> + (struct acpi_iort_smmu_v3 *)iort_node->node_data;
>
> Above you say: "Common compatibility defines and data_structures for porting arm smmu driver from Linux". But this code is clearly SMMUv3.
>
>> +
>> + if (node_smmu_data != NULL) {
>> + res.addr = node_smmu_data->base_address;
>> + res.size = SZ_128K;
>> + ret = 0;
>> + }
>> + } else {
>> + ret = dt_device_get_address(dev_to_dt(pdev), num,
>> + &res.addr, &res.size);
>> + }
>> +
>> + return ((ret) ? NULL : &res);
>> +
>> + case IORESOURCE_IRQ:
>> + /* ACPI case not implemented as there is no use case for it */
>> + ret = platform_get_irq(dev_to_dt(pdev), num);
>> +
>> + if (ret < 0)
>> + return NULL;
>> +
>> + res.addr = ret;
>> + res.size = 1;
>> +
>> + return &res;
>> +
>> + default:
>> + return NULL;
>> + }
>> +}
>> +
>> +static int platform_get_irq_byname(struct platform_device *pdev, const char *name)
>> +{
>> + const struct dt_property *dtprop;
>> + struct acpi_iort_node *iort_node;
>> + struct acpi_iort_smmu_v3 *node_smmu_data;
>> + int ret = 0;
>> +
>> + if (pdev->type == DEV_ACPI) {
>> + iort_node = pdev->acpi_node;
>> + node_smmu_data = (struct acpi_iort_smmu_v3 *)iort_node->node_data;
>
> Ditto.
>
>> +
>> + if (node_smmu_data != NULL) {
>> + if (!strcmp(name, "eventq"))
>> + ret = node_smmu_data->event_gsiv;
>> + else if (!strcmp(name, "priq"))
>> + ret = node_smmu_data->pri_gsiv;
>> + else if (!strcmp(name, "cmdq-sync"))
>> + ret = node_smmu_data->sync_gsiv;
>> + else if (!strcmp(name, "gerror"))
>> + ret = node_smmu_data->gerr_gsiv;
>> + else
>> + ret = -EINVAL;
>> + }
>> + } else {
>> + dtprop = dt_find_property(dev_to_dt(pdev), "interrupt-names", NULL);
>> + if (!dtprop)
>> + return -EINVAL;
>> +
>> + if (!dtprop->value)
>> + return -ENODATA;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +/* Xen: Stub out DMA domain related functions */
>
> I don't think 'Xen:' is necessary as this file contains Xen stubs.
>
>> +#define iommu_get_dma_cookie(dom) 0
>> +#define iommu_put_dma_cookie(dom) 0
>> +
>> +static void __iomem *devm_ioremap_resource(struct device *dev,
>> + struct resource *res)
>> +{
>> + void __iomem *ptr;
>> +
>> + if (!res || res->type != IORESOURCE_MEM) {
>> + dev_err(dev, "Invalid resource\n");
>> + return ERR_PTR(-EINVAL);
>> + }
>> +
>> + ptr = ioremap_nocache(res->addr, res->size);
>> + if (!ptr) {
>> + dev_err(dev,
>> + "ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
>> + res->addr, res->size);
>> + return ERR_PTR(-ENOMEM);
>> + }
>> +
>> + return ptr;
>> +}
>> +
>> +/* Xen: Dummy iommu_domain */
>> +struct iommu_domain {
>> + /* Runtime SMMU configuration for this iommu_domain */
>> + struct arm_smmu_domain *priv;
>> + unsigned int type;
>
> What are the values for type?
Not needed, will remove.
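The trimmed structure would then look roughly like this (a sketch; the
final form may differ):

    struct iommu_domain {
        /* Runtime SMMU configuration for this iommu_domain */
        struct arm_smmu_domain *priv;

        atomic_t ref;
        /*
         * Used to link iommu_domain contexts for a same domain.
         * There is at least one per SMMU used by the domain.
         */
        struct list_head list;
    };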
>
>> +
>> + atomic_t ref;
>> + /* Used to link iommu_domain contexts for a same domain.
>
> /*
> * Used ...
> */
>
>> + * There is at least one per-SMMU to be used by the domain.
>> + */
>> + struct list_head list;
>> +};
>> +/* Xen: Domain type definitions. Not really needed for Xen, defining to port
>
> /*
> * Xen: ...
>
>> + * Linux code as-is
>> + */
>> +#define IOMMU_DOMAIN_UNMANAGED 0
>> +#define IOMMU_DOMAIN_DMA 1
>> +#define IOMMU_DOMAIN_IDENTITY 2
>> +
>> +/* Xen: Describes information required for a Xen domain */
>> +struct arm_smmu_xen_domain {
>> + spinlock_t lock;
>> + /* List of iommu domains associated to this domain */
>> + struct list_head iommu_domains;
>> +};
>> +
>> +/*
>> + * Xen: Information about each device stored in dev->archdata.iommu
>> + *
>> + * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
>> + * the SMMU).
>> + */
>> +struct arm_smmu_xen_device {
>> + struct iommu_domain *domain;
>> +};
>> +
>> +#endif /* __ARM_SMMU_H__ */
>
> Missing emacs magic.
>
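Will add; for reference, something like the usual Xen trailer (the style
parameters would match this file's convention):

    /*
     * Local variables:
     * mode: C
     * c-file-style: "BSD"
     * c-basic-offset: 4
     * indent-tabs-mode: nil
     * End:
     */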
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index e67ba6c..c6c1b99 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -18,28 +18,38 @@
>> * Author: Will Deacon <will.deacon@arm.com>
>> *
>> * This driver is powered by bad coffee and bombay mix.
>> + *
>> + *
>> + * Based on Linux drivers/iommu/arm-smmu-v3.c
>> + * => commit 7aa8619a66aea52b145e04cbab4f8d6a4e5f3f3b
>> + *
>> + * Xen modifications:
>> + * Sameer Goel <sameer.goel@linaro.org>
>> + * Copyright (C) 2017, The Linux Foundation, All rights reserved.
>> + *
>> */
>> -#include <linux/acpi.h>
>> -#include <linux/acpi_iort.h>
>> -#include <linux/delay.h>
>> -#include <linux/dma-iommu.h>
>> -#include <linux/err.h>
>> -#include <linux/interrupt.h>
>> -#include <linux/iommu.h>
>> -#include <linux/iopoll.h>
>> -#include <linux/module.h>
>> -#include <linux/msi.h>
>> -#include <linux/of.h>
>> -#include <linux/of_address.h>
>> -#include <linux/of_iommu.h>
>> -#include <linux/of_platform.h>
>> -#include <linux/pci.h>
>> -#include <linux/platform_device.h>
>> -
>> -#include <linux/amba/bus.h>
>> -
>> -#include "io-pgtable.h"
>> +#include <xen/acpi.h>
>> +#include <xen/config.h>
>> +#include <xen/delay.h>
>> +#include <xen/errno.h>
>> +#include <xen/err.h>
>> +#include <xen/irq.h>
>> +#include <xen/lib.h>
>> +#include <xen/linux_compat.h>
>> +#include <xen/list.h>
>> +#include <xen/mm.h>
>> +#include <xen/rbtree.h>
>> +#include <xen/sched.h>
>> +#include <xen/sizes.h>
>> +#include <xen/vmap.h>
>> +#include <acpi/acpi_iort.h>
>> +#include <asm/atomic.h>
>> +#include <asm/device.h>
>> +#include <asm/io.h>
>> +#include <asm/platform.h>
>> +
>> +#include "arm_smmu.h" /* Not a self contained header. So last in the list */
>> /* MMIO registers */
>> #define ARM_SMMU_IDR0 0x0
>> @@ -423,9 +433,12 @@
>> #endif
>> static bool disable_bypass;
>> +
>> +#if 0 /* Xen: Not applicable for Xen */
>> module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
>> MODULE_PARM_DESC(disable_bypass,
>> "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
>> +#endif
>
> Can't you stub module_param_named and MODULE_PARM_DESC to avoid #if 0?
Ok.
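Something like this in the stub header (a sketch):

    /* Xen: there are no module parameters, so stub these out */
    #define module_param_named(name, value, type, perm)
    #define MODULE_PARM_DESC(_parm, desc)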
>
>> enum pri_resp {
>> PRI_RESP_DENY,
>> @@ -433,6 +446,7 @@ enum pri_resp {
>> PRI_RESP_SUCC,
>> };
>> +#if 0 /* Xen: No MSI support in this iteration */
>> enum arm_smmu_msi_index {
>> EVTQ_MSI_INDEX,
>> GERROR_MSI_INDEX,
>> @@ -457,6 +471,7 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
>> ARM_SMMU_PRIQ_IRQ_CFG2,
>> },
>> };
>> +#endif
>> struct arm_smmu_cmdq_ent {
>> /* Common fields */
>> @@ -561,6 +576,8 @@ struct arm_smmu_s2_cfg {
>> u16 vmid;
>> u64 vttbr;
>> u64 vtcr;
>> + /* Xen: Domain associated to this configuration */
>> + struct domain *domain;
>> };
>> struct arm_smmu_strtab_ent {
>> @@ -635,9 +652,21 @@ struct arm_smmu_device {
>> struct arm_smmu_strtab_cfg strtab_cfg;
>> /* IOMMU core code handle */
>> +#if 0 /*Xen: Generic iommu_device ref not needed here */
>> struct iommu_device iommu;
>> +#endif
>> + /* Xen: Need to keep a list of SMMU devices */
>> + struct list_head devices;
>> };
>> +/* Xen: Keep a list of devices associated with this driver */
>> +static DEFINE_SPINLOCK(arm_smmu_devices_lock);
>> +static LIST_HEAD(arm_smmu_devices);
>> +/* Xen: Helper for finding a device using fwnode */
>> +static
>> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode);
>> +
>> +
>> /* SMMU private data for each master */
>> struct arm_smmu_master_data {
>> struct arm_smmu_device *smmu;
>> @@ -654,7 +683,7 @@ enum arm_smmu_domain_stage {
>> struct arm_smmu_domain {
>> struct arm_smmu_device *smmu;
>> - struct mutex init_mutex; /* Protects smmu pointer */
>> + mutex init_mutex; /* Protects smmu pointer */
>> struct io_pgtable_ops *pgtbl_ops;
>> @@ -961,6 +990,7 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>> spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>> }
>> +#if 0 /*Xen: Comment out functions that set up S1 translations */
>
> Why? I do agree that the code will not be used by Xen, but I would prefer that you minimize the number of #ifdefs.
Ok.
>
>> /* Context descriptor manipulation functions */
>> static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
>> {
>> @@ -1003,6 +1033,7 @@ static void arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
>> cfg->cdptr[3] = cpu_to_le64(cfg->cd.mair << CTXDESC_CD_3_MAIR_SHIFT);
>> }
>> +#endif
>> /* Stream table manipulation functions */
>> static void
>> @@ -1164,6 +1195,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>> void *strtab;
>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>> struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
>> + u32 alignment = 0;
>
> It is not necessary to initialize alignment. Also, we are trying to limit the use of u* in favor of uint32_t.
Ok. The specific alignment in the Linux kernel is forced by aligning the
memory to the size passed in. I have created a macro for dmam_alloc_coherent.
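Roughly along these lines (a sketch; it assumes the size is a power of
two, so that aligning to the size reproduces the Linux alignment, and it
relies on Xen's _xzalloc() and virt_to_maddr()):

    #define dmam_alloc_coherent(dev, size, dma_handle, gfp) \
    ({                                                      \
        void *__ptr = _xzalloc(size, size);                 \
                                                            \
        if ( __ptr )                                        \
            *(dma_handle) = virt_to_maddr(__ptr);           \
        __ptr;                                              \
    })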
>
>> if (desc->l2ptr)
>> return 0;
>> @@ -1172,14 +1204,16 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>> strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
>> desc->span = STRTAB_SPLIT + 1;
>> - desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
>> - GFP_KERNEL | __GFP_ZERO);
>> + /* Alignment picked from ARM SMMU arch version 3.x. L1ST.L2Ptr */
>> + alignment = 1 << ((5 + (desc->span - 1)));
>> + desc->l2ptr = _xzalloc(size, alignment);
>> if (!desc->l2ptr) {
>> dev_err(smmu->dev,
>> "failed to allocate l2 stream table for SID %u\n",
>> sid);
>> return -ENOMEM;
>> }
>> + desc->l2ptr_dma = virt_to_maddr(desc->l2ptr);
>> arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
>> arm_smmu_write_strtab_l1_desc(strtab, desc);
>> @@ -1232,7 +1266,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>> dev_info(smmu->dev, "unexpected PRI request received:\n");
>> dev_info(smmu->dev,
>> - "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
>> + "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova %#" PRIx64 "\n",
>> sid, ssid, grpid, last ? "L" : "",
>> evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
>> evt[0] & PRIQ_0_PERM_READ ? "R" : "",
>> @@ -1346,6 +1380,8 @@ static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
>> {
>> arm_smmu_gerror_handler(irq, dev);
>> arm_smmu_cmdq_sync_handler(irq, dev);
>> + /*Xen: No threaded irq. So call the required function from here */
>> + arm_smmu_combined_irq_thread(irq, dev);
>> return IRQ_WAKE_THREAD;
>> }
>> @@ -1358,11 +1394,49 @@ static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
>> arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>> }
>> +static void arm_smmu_evtq_thread_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_evtq_thread(irq, dev);
>> +}
>> +
>> +static void arm_smmu_priq_thread_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_priq_thread(irq, dev);
>> +}
>> +
>> +static void arm_smmu_cmdq_sync_handler_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_cmdq_sync_handler(irq, dev);
>> +}
>> +
>> +static void arm_smmu_gerror_handler_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_gerror_handler(irq, dev);
>> +}
>> +
>> +static void arm_smmu_combined_irq_handler_xen(int irq, void *dev,
>> + struct cpu_user_regs *regs)
>> +{
>> + arm_smmu_combined_irq_handler(irq, dev);
>> +}
>> +
>
> Missing:
> /* Xen: .... */
Ok.
>
>> +#define arm_smmu_evtq_thread arm_smmu_evtq_thread_xen
>> +#define arm_smmu_priq_thread arm_smmu_priq_thread_xen
>> +#define arm_smmu_cmdq_sync_handler arm_smmu_cmdq_sync_handler_xen
>> +#define arm_smmu_gerror_handler arm_smmu_gerror_handler_xen
>> +#define arm_smmu_combined_irq_handler arm_smmu_combined_irq_handler_xen
>> +
>> +#if 0 /*Xen: Unused function */
>> static void arm_smmu_tlb_sync(void *cookie)
>> {
>> struct arm_smmu_domain *smmu_domain = cookie;
>> __arm_smmu_tlb_sync(smmu_domain->smmu);
>> }
>> +#endif
>> static void arm_smmu_tlb_inv_context(void *cookie)
>> {
>> @@ -1383,6 +1457,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>> __arm_smmu_tlb_sync(smmu);
>> }
>> +#if 0 /*Xen: Unused functionality */
>> static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>> size_t granule, bool leaf, void *cookie)
>> {
>> @@ -1427,6 +1502,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
>> return false;
>> }
>> }
>> +#endif
>> static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>> {
>> @@ -1474,6 +1550,7 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
>> clear_bit(idx, map);
>> }
>> +#if 0
>> static void arm_smmu_domain_free(struct iommu_domain *domain)
>> {
>> struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> @@ -1502,7 +1579,23 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>> kfree(smmu_domain);
>> }
>> +#endif
>> +
>> +static void arm_smmu_domain_free(struct iommu_domain *domain)
>> +{
>> + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> + struct arm_smmu_device *smmu = smmu_domain->smmu;
>> + struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>> + /*
>> + * Xen: Remove the free functions that are not used and code related
>> + * to S1 translation. We just need to free the domain and vmid here.
>> + */
>
> Can you please give a reason to remove the stage-1 code? This is not in the spirit of a verbatim port, and I still can't see why you can't keep it.
Have restored the code.
>
>> + if (cfg->vmid)
>> + arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
>> + kfree(smmu_domain);
>> +}
>> +#if 0 /*Xen: The finalize domain functions are not needed in current form */
>> static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
>> struct io_pgtable_cfg *pgtbl_cfg)
>> {
>> @@ -1551,16 +1644,41 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>> cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
>> return 0;
>> }
>> +#endif
>> +
>> +static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain)
>> +{
>> + int vmid;
>> + struct arm_smmu_device *smmu = smmu_domain->smmu;
>> + struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>> +
>> + vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>> + if (vmid < 0)
>> + return vmid;
>> +
>> + /* Xen: Get the ttbr and vtcr values
>
> /*
> * Xen: ...
>
> But why do you need to duplicate the function when you can just replace the two lines that need to be modified?
>
Fixed.
>> + * vttbr: This is a shared value with the domain page table
>> + * vtcr: The TCR settings are the same as CPU since he page
> s/he/the/
>
Ok
>> + * tables are shared
>> + */
>> +
>> + cfg->vmid = vmid;
>> + cfg->vttbr = page_to_maddr(cfg->domain->arch.p2m.root);
>> + cfg->vtcr = READ_SYSREG32(VTCR_EL2) & STRTAB_STE_2_VTCR_MASK;
>
> I still think this is really fragile. You at least need a comment on the other side (e.g. where VTCR_EL2 is written) to explain that you are relying on this value elsewhere.
>
Added a comment.
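Roughly (the wording is illustrative), next to the place in the p2m code
where VTCR_EL2 is programmed:

    /*
     * The SMMUv3 driver (drivers/passthrough/arm/smmu-v3.c) reads back
     * VTCR_EL2 and reuses the value for the STE VTCR field, since the
     * stage-2 page tables are shared with the CPU. Keep them in sync.
     */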
>> + return 0;
>> +}
>> static int arm_smmu_domain_finalise(struct iommu_domain *domain)
>> {
>> int ret;
>> +#if 0 /* Xen: pgtbl_cfg not needed. So modify the function as needed */
>> unsigned long ias, oas;
>> enum io_pgtable_fmt fmt;
>> struct io_pgtable_cfg pgtbl_cfg;
>> struct io_pgtable_ops *pgtbl_ops;
>> int (*finalise_stage_fn)(struct arm_smmu_domain *,
>> struct io_pgtable_cfg *);
>> +#endif
>> struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> struct arm_smmu_device *smmu = smmu_domain->smmu;
>> @@ -1575,6 +1693,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
>> if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
>> smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
>> +#if 0
>> switch (smmu_domain->stage) {
>> case ARM_SMMU_DOMAIN_S1:
>> ias = VA_BITS;
>> @@ -1616,7 +1735,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
>> ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
>> if (ret < 0)
>> free_io_pgtable_ops(pgtbl_ops);
>> +#endif
>> + ret = arm_smmu_domain_finalise_s2(smmu_domain);
>> return ret;
>> }
>> @@ -1709,7 +1830,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>> } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>> ste->s1_cfg = &smmu_domain->s1_cfg;
>> ste->s2_cfg = NULL;
>> +#if 0 /*Xen: S1 configuration not needed */
>
> What would be the issue with leaving this code uncommented?
>
Uncommented the code.
>> arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
>> +#endif
>> } else {
>> ste->s1_cfg = NULL;
>> ste->s2_cfg = &smmu_domain->s2_cfg;
>> @@ -1721,6 +1844,7 @@ out_unlock:
>> return ret;
>> }
>> +#if 0
>
> /* Xen: ... */
>
>> static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
>> phys_addr_t paddr, size_t size, int prot)
>> {
>> @@ -1772,6 +1896,7 @@ struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
>> put_device(dev);
>> return dev ? dev_get_drvdata(dev) : NULL;
>> }
>> +#endif
>> static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>> {
>> @@ -1782,8 +1907,9 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>> return sid < limit;
>> }
>> -
>
> Please don't remove the newline.
It was not my intent to remove the line; a few such removals were left in by mistake.
>
>> +#if 0
>> static struct iommu_ops arm_smmu_ops;
>> +#endif
>> static int arm_smmu_add_device(struct device *dev)
>> {
>> @@ -1791,9 +1917,12 @@ static int arm_smmu_add_device(struct device *dev)
>> struct arm_smmu_device *smmu;
>> struct arm_smmu_master_data *master;
>> struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +#if 0 /*Xen: iommu_group is not needed */
>> struct iommu_group *group;
>> +#endif
>> - if (!fwspec || fwspec->ops != &arm_smmu_ops)
>> + /* Xen: fwspec->ops are not needed */
>> + if (!fwspec)
>> return -ENODEV;
>> /*
>> * We _can_ actually withstand dodgy bus code re-calling add_device()
>> @@ -1830,6 +1959,12 @@ static int arm_smmu_add_device(struct device *dev)
>> }
>> }
>> +#if 0
>> +/*
>> + * Xen: Do not need an iommu group as the stream data is carried by the SMMU
>> + * master device object
>> + */
>
> It is better to put this before the #if 0, so the IDE will still show the comment even when the #if 0 block is folded.
>
Done.
>> +
>> group = iommu_group_get_for_dev(dev);
>> if (!IS_ERR(group)) {
>> iommu_group_put(group);
>> @@ -1837,8 +1972,16 @@ static int arm_smmu_add_device(struct device *dev)
>> }
>> return PTR_ERR_OR_ZERO(group);
>> +#endif
>> + return 0;
>> }
>> +/*
>> + * Xen: We can potentially support this function and destroy a device. This
>> + * will be relevant for PCI hotplug. So, will be implemented as needed after
>> + * passthrough support is available.
>> + */
>> +#if 0
>> static void arm_smmu_remove_device(struct device *dev)
>> {
>> struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> @@ -1974,7 +2117,7 @@ static struct iommu_ops arm_smmu_ops = {
>> .put_resv_regions = arm_smmu_put_resv_regions,
>> .pgsize_bitmap = -1UL, /* Restricted during device attach */
>> };
>> -
>
> Ditto for the newline. I know I didn't mention it in every place in the previous series. But I would have expected you to apply my comments everywhere.
>
>> +#endif
>> /* Probing and initialisation functions */
>> static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>> struct arm_smmu_queue *q,
>> @@ -1984,13 +2127,19 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>> {
>> size_t qsz = ((1 << q->max_n_shift) * dwords) << 3;
>> - q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma, GFP_KERNEL);
>> + /* The SMMU cache coherency property is always set. Since we are sharing the CPU translation tables
>
> /*
> * ...
>
>> + * just make a regular allocation.
>
> I am not sure I understand it. AFAIU, q is for the command queue, so how does sharing the CPU translation tables help here?
That is an old comment left over from some initial code.
>
> Furthermore, I don't understand how you can say the cache coherency property is always set. When I look at the driver, it seems able to handle non-coherent memory. So where do you modify that?
>
>> + */
>> + q->base = _xzalloc(qsz, sizeof(void *));
>> +
>> if (!q->base) {
>> dev_err(smmu->dev, "failed to allocate queue (0x%zx bytes)\n",
>> qsz);
>> return -ENOMEM;
>> }
>> + q->base_dma = virt_to_maddr(q->base);
>> +
>> q->prod_reg = arm_smmu_page1_fixup(prod_off, smmu);
>> q->cons_reg = arm_smmu_page1_fixup(cons_off, smmu);
>> q->ent_dwords = dwords;
>> @@ -2056,6 +2205,7 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
>> u64 reg;
>> u32 size, l1size;
>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>> + u32 alignment;
>> /* Calculate the L1 size, capped to the SIDSIZE. */
>> size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
>> @@ -2069,14 +2219,17 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
>> size, smmu->sid_bits);
>> l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
>> - strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
>> - GFP_KERNEL | __GFP_ZERO);
>> + alignment = max_t(u32, cfg->num_l1_ents, 64);
>
> Same as before. I know I didn't go through the rest of the code, but you could have at least applied my comments on alignment here too. E.g., where does the 64 come from?
>
> But it looks to me like you want to create a dmam_alloc_coherent function that will do the allocation for you. This could be used in a few places within the driver file...
I have removed this.
>
>> + strtab = _xzalloc(l1size, l1size);
>> +
>> if (!strtab) {
>> dev_err(smmu->dev,
>> "failed to allocate l1 stream table (%u bytes)\n",
>> size);
>> return -ENOMEM;
>> }
>> +
>> + cfg->strtab_dma = virt_to_maddr(strtab);
>> cfg->strtab = strtab;
>> /* Configure strtab_base_cfg for 2 levels */
>> @@ -2098,14 +2251,16 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
>> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
>> size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
>> - strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
>> - GFP_KERNEL | __GFP_ZERO);
>
> ... such as here.
>
>> + strtab = _xzalloc(size, size);
>
> Hmmm, _xzalloc contains the following assert:
>
> ASSERT((align & (align - 1)) == 0);
>
> How are you sure the size will always honor this check?
It should hold for the L1 and L2 stream table setup. But I will put a conditional in the wrapper: if this does not hold true, I'll print a warning and set a compliant alignment.
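A sketch of that guard (names illustrative):

    static void *xen_dma_alloc(size_t size, size_t align, paddr_t *dma)
    {
        void *ptr;

        if ( align & (align - 1) )  /* not a power of two */
        {
            size_t a = 1;

            /* Round up to the next power of two so _xzalloc()'s ASSERT holds */
            while ( a < align )
                a <<= 1;
            printk(XENLOG_WARNING
                   "smmu-v3: rounding alignment %zu up to %zu\n", align, a);
            align = a;
        }

        ptr = _xzalloc(size, align);
        if ( ptr )
            *dma = virt_to_maddr(ptr);

        return ptr;
    }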
>
>> +
>> if (!strtab) {
>> dev_err(smmu->dev,
>> "failed to allocate linear stream table (%u bytes)\n",
>> size);
>> return -ENOMEM;
>> }
>> +
>> + cfg->strtab_dma = virt_to_maddr(strtab);
>> cfg->strtab = strtab;
>> cfg->num_l1_ents = 1 << smmu->sid_bits;
>> @@ -2182,6 +2337,7 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
>> 1, ARM_SMMU_POLL_TIMEOUT_US);
>> }
>> +#if 0 /* Xen: There is no MSI support as yet */
>> static void arm_smmu_free_msis(void *data)
>> {
>> struct device *dev = data;
>> @@ -2247,36 +2403,39 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
>> /* Add callback to free MSIs on teardown */
>> devm_add_action(dev, arm_smmu_free_msis, dev);
>> }
>> +#endif
>> static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>> {
>> int irq, ret;
>> +#if 0 /*Xen: Cannot setup msis for now */
>> arm_smmu_setup_msis(smmu);
>> +#endif
>> /* Request interrupt lines */
>> irq = smmu->evtq.q.irq;
>> if (irq) {
>> - ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
>> - arm_smmu_evtq_thread,
>> - IRQF_ONESHOT,
>> - "arm-smmu-v3-evtq", smmu);
>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>
> Why do you need to set the IRQ type? Can't it be found from the firmware tables?
I did not see this getting set in the code. I will recheck.
>
>> + ret = request_irq(irq, arm_smmu_evtq_thread,
>> + 0, "arm-smmu-v3-evtq", smmu);
>
> Please create a stub for devm_request_threaded_irq.
Done.
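The stub ends up along these lines (a sketch; it keeps the edge-both
trigger the patch currently sets, pending the firmware-tables question
above, and calls the thread function directly since Xen has no threaded
IRQs; a companion devm_request_irq stub would follow the same pattern):

    #define devm_request_threaded_irq(dev, irq, handler, thread_fn, \
                                      flags, name, priv)            \
    ({                                                              \
        irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);                      \
        request_irq(irq, (thread_fn) ? (thread_fn) : (handler), 0,  \
                    name, priv);                                    \
    })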
>
>> if (ret < 0)
>> dev_warn(smmu->dev, "failed to enable evtq irq\n");
>> }
>> irq = smmu->cmdq.q.irq;
>> if (irq) {
>> - ret = devm_request_irq(smmu->dev, irq,
>> - arm_smmu_cmdq_sync_handler, 0,
>> - "arm-smmu-v3-cmdq-sync", smmu);
>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>> + ret = request_irq(irq, arm_smmu_cmdq_sync_handler,
>> + 0, "arm-smmu-v3-cmdq-sync", smmu);
>
> Ditto.
>
>> if (ret < 0)
>> dev_warn(smmu->dev, "failed to enable cmdq-sync irq\n");
>> }
>> irq = smmu->gerr_irq;
>> if (irq) {
>> - ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>> + ret = request_irq(irq, arm_smmu_gerror_handler,
>> 0, "arm-smmu-v3-gerror", smmu);
>
> Ditto.
>
>> if (ret < 0)
>> dev_warn(smmu->dev, "failed to enable gerror irq\n");
>> @@ -2284,12 +2443,13 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>> if (smmu->features & ARM_SMMU_FEAT_PRI) {
>> irq = smmu->priq.q.irq;
>> + irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
>> if (irq) {
>> - ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
>> - arm_smmu_priq_thread,
>> - IRQF_ONESHOT,
>> - "arm-smmu-v3-priq",
>> - smmu);
>> + ret = request_irq(irq,
>> + arm_smmu_priq_thread,
>> + 0,
>> + "arm-smmu-v3-priq",
>> + smmu);
>
> Ditto.
>
>> if (ret < 0)
>> dev_warn(smmu->dev,
>> "failed to enable priq irq\n");
>> @@ -2316,11 +2476,11 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
>> * Cavium ThunderX2 implementation doesn't support unique
>> * irq lines. Use a single irq line for all the SMMUv3 interrupts.
>> */
>> - ret = devm_request_threaded_irq(smmu->dev, irq,
>> - arm_smmu_combined_irq_handler,
>> - arm_smmu_combined_irq_thread,
>> - IRQF_ONESHOT,
>> - "arm-smmu-v3-combined-irq", smmu);
>> + ret = request_irq(irq,
>> + arm_smmu_combined_irq_handler,
>> + 0,
>> + "arm-smmu-v3-combined-irq",
>> + smmu);
>
> Ditto. And here is a good example of where a stub is good. You set the IRQ type everywhere but not for this one.
>
>> if (ret < 0)
>> dev_warn(smmu->dev, "failed to enable combined irq\n");
>> } else
>> @@ -2542,8 +2702,11 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> smmu->features |= ARM_SMMU_FEAT_STALLS;
>> }
>> +#if 0/* Xen: Do not enable Stage 1 translations */
>
> This is just saying stage-1 is available. So why do you care so much about disabling it? This just adds more #if 0; we managed to get away in SMMUv1 by leaving the code as it was.
This I actually need for now. If we decide to do something along the lines of what arm_smmu_domain_set_attr does, then I will remove this. I will put in a comment.
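The comment will say something like (wording illustrative):

    /*
     * Xen: Only stage-2 translation is used for DMA protection, so do
     * not advertise stage-1 support; this keeps the driver from ever
     * selecting a stage-1 configuration for a domain.
     */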
>
>> +
>> if (reg & IDR0_S1P)
>> smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
>> +#endif
>> if (reg & IDR0_S2P)
>> smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
>> @@ -2616,10 +2779,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> if (reg & IDR5_GRAN4K)
>> smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
>> +#if 0 /* Xen: SMMU ops do not have a pgsize_bitmap member for Xen */
>> if (arm_smmu_ops.pgsize_bitmap == -1UL)
>> arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
>> else
>> arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
>> +#endif
>> /* Output address size */
>> switch (reg & IDR5_OAS_MASK << IDR5_OAS_SHIFT) {
>> @@ -2646,10 +2811,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> smmu->oas = 48;
>> }
>> +#if 0 /* Xen: There is no support for DMA mask */
>
> Stub it?
Ok.
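A sketch of the stubs (Xen does not track DMA masks, so they can simply
succeed):

    #define DMA_BIT_MASK(n)  (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
    #define dma_set_mask_and_coherent(dev, mask)  (0)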
>
>> /* Set the DMA mask for our table walker */
>> if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
>> dev_warn(smmu->dev,
>> "failed to set DMA mask for table walker\n");
>> +#endif
>> smmu->ias = max(smmu->ias, smmu->oas);
>> @@ -2680,7 +2847,8 @@ static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>> struct device *dev = smmu->dev;
>> struct acpi_iort_node *node;
>> - node = *(struct acpi_iort_node **)dev_get_platdata(dev);
>> + /* Xen: Modification to get iort_node */
>> + node = (struct acpi_iort_node *)dev->acpi_node;
>> /* Retrieve SMMUv3 specific data */
>> iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
>> @@ -2703,7 +2871,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>> static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>> struct arm_smmu_device *smmu)
>> {
>> - struct device *dev = &pdev->dev;
>> + struct device *dev = pdev;
>> u32 cells;
>> int ret = -EINVAL;
>> @@ -2716,8 +2884,8 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>> parse_driver_options(smmu);
>> - if (of_dma_is_coherent(dev->of_node))
>> - smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>> + /* Xen: Set the COHERNECY feature */
>> + smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>
> This looks completely wrong. You should only do it when the firmware tables say it is fine.
I will stub this until we add DT support.
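A compat stub keyed on the standard "dma-coherent" property could look
like this (a sketch, reusing the existing dt_property_read_bool()):

    static inline bool of_dma_is_coherent(const struct dt_device_node *np)
    {
        return dt_property_read_bool(np, "dma-coherent");
    }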
>
>> return ret;
>> }
>> @@ -2734,9 +2902,11 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>> {
>> int irq, ret;
>> struct resource *res;
>> +#if 0 /*Xen: Do not need to setup sysfs */
>> resource_size_t ioaddr;
>> +#endif
>> struct arm_smmu_device *smmu;
>> - struct device *dev = &pdev->dev;
>> + struct device *dev = pdev;/* Xen: dev is ignored */
>> bool bypass;
>> smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
>> @@ -2763,8 +2933,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>> dev_err(dev, "MMIO region too small (%pr)\n", res);
>> return -EINVAL;
>> }
>> +#if 0 /*Xen: Do not need to setup sysfs */
>> ioaddr = res->start;
>> -
>
> Again the newline.
>
>> +#endif
>> smmu->base = devm_ioremap_resource(dev, res);
>> if (IS_ERR(smmu->base))
>> return PTR_ERR(smmu->base);
>> @@ -2802,13 +2973,16 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>> return ret;
>> /* Record our private device structure */
>> +#if 0 /* Xen: SMMU is not treated as a platform device */
>> platform_set_drvdata(pdev, smmu);
>> -
>
> Again the newline.
>
>> +#endif
>> /* Reset the device */
>> ret = arm_smmu_device_reset(smmu, bypass);
>> if (ret)
>> return ret;
>> +/* Xen: Not creating an IOMMU device list for Xen */
>> +#if 0
>> /* And we're up. Go go go! */
>> ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
>> "smmu3.%pa", &ioaddr);
>> @@ -2844,9 +3018,18 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>> if (ret)
>> return ret;
>> }
>> +#endif
>> + /*
>> + * Xen: Keep a list of all probed devices. This will be used to query
>> + * the smmu devices based on the fwnode.
>> + */
>> + INIT_LIST_HEAD(&smmu->devices);
>> + spin_lock(&arm_smmu_devices_lock);
>> + list_add(&smmu->devices, &arm_smmu_devices);
>> + spin_unlock(&arm_smmu_devices_lock);
>> return 0;
>> }
>> -
>
> Again the newline removed and /* Xen ... */
>> +#if 0
>> static int arm_smmu_device_remove(struct platform_device *pdev)
>> {
>> struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
>> @@ -2860,6 +3043,10 @@ static void arm_smmu_device_shutdown(struct platform_device *pdev)
>> {
>> arm_smmu_device_remove(pdev);
>> }
>> +#endif
>> +
>> +#define MODULE_DEVICE_TABLE(type, name)
>> +#define of_device_id dt_device_match
>
> Those should be defined at the top.
>
Ok.
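Will hoist them next to the other compat defines, e.g.:

    /* Xen: compat defines, kept at the top of the Xen glue code */
    #define MODULE_DEVICE_TABLE(type, name)
    #define of_device_id dt_device_match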
Thanks,
Sameer
>> static const struct of_device_id arm_smmu_of_match[] = {
>> { .compatible = "arm,smmu-v3", },
>> @@ -2867,6 +3054,7 @@ static const struct of_device_id arm_smmu_of_match[] = {
>> };
>> MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
>> +#if 0
>> static struct platform_driver arm_smmu_driver = {
>> .driver = {
>> .name = "arm-smmu-v3",
>> @@ -2883,3 +3071,318 @@ IOMMU_OF_DECLARE(arm_smmuv3, "arm,smmu-v3", NULL);
>> MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
>> MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
>> MODULE_LICENSE("GPL v2");
>> +#endif
>> +
>> +/***** Start of Xen specific code *****/
>> +
>> +static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
>> +{
>> + struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
>> + struct iommu_domain *cfg;
>> +
>> + spin_lock(&smmu_domain->lock);
>> + list_for_each_entry(cfg, &smmu_domain->iommu_domains, list) {
>> + /*
>> + * Only invalidate the context when SMMU is present.
>> + * This is because the context initialization is delayed
>> + * until a master has been added.
>> + */
>> + if (unlikely(!ACCESS_ONCE(cfg->priv->smmu)))
>> + continue;
>> + arm_smmu_tlb_inv_context(cfg->priv);
>> + }
>> + spin_unlock(&smmu_domain->lock);
>> + return 0;
>> +}
>> +
>> +static int __must_check arm_smmu_iotlb_flush(struct domain *d,
>> + unsigned long gfn,
>> + unsigned int page_count)
>> +{
>> + return arm_smmu_iotlb_flush_all(d);
>> +}
>> +
>> +static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
>> + struct device *dev)
>> +{
>> + struct iommu_domain *domain;
>> + struct arm_smmu_xen_domain *xen_domain;
>> + struct arm_smmu_device *smmu;
>> + struct arm_smmu_domain *smmu_domain;
>> +
>> + xen_domain = dom_iommu(d)->arch.priv;
>> +
>> + smmu = arm_smmu_get_by_fwnode(dev->iommu_fwspec->iommu_fwnode);
>> + if (!smmu)
>> + return NULL;
>> +
>> + /*
>> + * Loop through the &xen_domain->contexts to locate a context
>> + * assigned to this SMMU
>> + */
>> + list_for_each_entry(domain, &xen_domain->iommu_domains, list) {
>> + smmu_domain = to_smmu_domain(domain);
>> + if (smmu_domain->smmu == smmu)
>> + return domain;
>> + }
>> +
>> + return NULL;
>> +}
>> +
>> +static void arm_smmu_destroy_iommu_domain(struct iommu_domain *domain)
>> +{
>> + list_del(&domain->list);
>> + arm_smmu_domain_free(domain);
>> +}
>> +
>> +static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
>> + struct device *dev, u32 flag)
>> +{
>> + int ret = 0;
>> + struct iommu_domain *domain;
>> + struct arm_smmu_xen_domain *xen_domain;
>> + struct arm_smmu_domain *arm_smmu;
>> +
>> + xen_domain = dom_iommu(d)->arch.priv;
>> +
>> + if (!dev->archdata.iommu) {
>> + dev->archdata.iommu = xzalloc(struct arm_smmu_xen_device);
>> + if (!dev->archdata.iommu)
>> + return -ENOMEM;
>> + }
>> +
>> + ret = arm_smmu_add_device(dev);
>> + if (ret)
>> + return ret;
>> +
>> + spin_lock(&xen_domain->lock);
>> +
>> + /*
>> + * Check to see if an iommu_domain already exists for this xen domain
>> + * under the same SMMU
>> + */
>> + domain = arm_smmu_get_domain(d, dev);
>> + if (!domain) {
>> +
>> + domain = arm_smmu_domain_alloc(IOMMU_DOMAIN_DMA);
>> + if (!domain) {
>> + ret = -ENOMEM;
>> + goto out;
>> + }
>> +
>> + arm_smmu = to_smmu_domain(domain);
>> + arm_smmu->s2_cfg.domain = d;
>> +
>> + /* Chain the new context to the domain */
>> + list_add(&domain->list, &xen_domain->iommu_domains);
>> +
>> + }
>> +
>> + ret = arm_smmu_attach_dev(domain, dev);
>> + if (ret) {
>> + if (domain->ref.counter == 0)
>> + arm_smmu_destroy_iommu_domain(domain);
>> + } else {
>> + atomic_inc(&domain->ref);
>> + }
>> +
>> +out:
>> + spin_unlock(&xen_domain->lock);
>> + return ret;
>> +}
>> +
>> +static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
>> +{
>> + struct iommu_domain *domain = arm_smmu_get_domain(d, dev);
>> + struct arm_smmu_xen_domain *xen_domain;
>> + struct arm_smmu_domain *arm_smmu = to_smmu_domain(domain);
>> +
>> + xen_domain = dom_iommu(d)->arch.priv;
>> +
>> + if (!arm_smmu || arm_smmu->s2_cfg.domain != d) {
>> + dev_err(dev, " not attached to domain %d\n", d->domain_id);
>> + return -ESRCH;
>> + }
>> +
>> + spin_lock(&xen_domain->lock);
>> +
>> + arm_smmu_detach_dev(dev);
>> + atomic_dec(&domain->ref);
>> +
>> + if (domain->ref.counter == 0)
>> + arm_smmu_destroy_iommu_domain(domain);
>> +
>> + spin_unlock(&xen_domain->lock);
>> +
>> +
>> +
>> + return 0;
>> +}
>> +
>> +static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
>> + u8 devfn, struct device *dev)
>> +{
>> + int ret = 0;
>> +
>> + /* Don't allow remapping on other domain than hwdom */
>> + if (t && t != hardware_domain)
>> + return -EPERM;
>> +
>> + if (t == s)
>> + return 0;
>> +
>> + ret = arm_smmu_deassign_dev(s, dev);
>> + if (ret)
>> + return ret;
>> +
>> + if (t) {
>> + /* No flags are defined for ARM. */
>> + ret = arm_smmu_assign_dev(t, devfn, dev, 0);
>> + if (ret)
>> + return ret;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int arm_smmu_iommu_domain_init(struct domain *d)
>> +{
>> + struct arm_smmu_xen_domain *xen_domain;
>> +
>> + xen_domain = xzalloc(struct arm_smmu_xen_domain);
>> + if (!xen_domain)
>> + return -ENOMEM;
>> +
>> + spin_lock_init(&xen_domain->lock);
>> + INIT_LIST_HEAD(&xen_domain->iommu_domains);
>> +
>> + dom_iommu(d)->arch.priv = xen_domain;
>> +
>> + return 0;
>> +}
>> +
>> +static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
>> +{
>> +}
>> +
>> +static void arm_smmu_iommu_domain_teardown(struct domain *d)
>> +{
>> + struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>> +
>> + ASSERT(list_empty(&xen_domain->iommu_domains));
>> + xfree(xen_domain);
>> +}
>> +
>> +static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
>> + unsigned long mfn, unsigned int flags)
>> +{
>> + p2m_type_t t;
>> +
>> + /*
>> + * Grant mappings can be used for DMA requests. The dev_bus_addr
>> + * returned by the hypercall is the MFN (not the IPA). For device
>> + * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain
>> + * p2m to allow DMA request to work.
>> + * This is only valid when the domain is directed mapped. Hence this
>> + * function should only be used by gnttab code with gfn == mfn.
>> + */
>> + BUG_ON(!is_domain_direct_mapped(d));
>> + BUG_ON(mfn != gfn);
>> +
>> + /* We only support readable and writable flags */
>> + if (!(flags & (IOMMUF_readable | IOMMUF_writable)))
>> + return -EINVAL;
>> +
>> + t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro;
>> +
>> + /*
>> + * The function guest_physmap_add_entry replaces the current mapping
>> + * if there is already one...
>> + */
>> + return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
>> +}
>> +
>> +static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
>> +{
>> + /*
>> + * This function should only be used by gnttab code when the domain
>> + * is direct mapped
>> + */
>> + if (!is_domain_direct_mapped(d))
>> + return -EINVAL;
>> +
>> + return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
>> +}
>> +
>> +static const struct iommu_ops arm_smmu_iommu_ops = {
>> + .init = arm_smmu_iommu_domain_init,
>> + .hwdom_init = arm_smmu_iommu_hwdom_init,
>> + .teardown = arm_smmu_iommu_domain_teardown,
>> + .iotlb_flush = arm_smmu_iotlb_flush,
>> + .iotlb_flush_all = arm_smmu_iotlb_flush_all,
>> + .assign_device = arm_smmu_assign_dev,
>> + .reassign_device = arm_smmu_reassign_dev,
>> + .map_page = arm_smmu_map_page,
>> + .unmap_page = arm_smmu_unmap_page,
>> +};
>> +
>> +static
>> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
>> +{
>> + struct arm_smmu_device *smmu = NULL;
>> +
>> + spin_lock(&arm_smmu_devices_lock);
>> + list_for_each_entry(smmu, &arm_smmu_devices, devices) {
>> + if (smmu->dev->fwnode == fwnode)
>> + break;
>> + }
>> + spin_unlock(&arm_smmu_devices_lock);
>> +
>> + return smmu;
>> +}
>> +
>> +static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>> + const void *data)
>> +{
>> + int rc;
>> +
>> + /*
>> + * Even if the device can't be initialized, we don't want to
>> + * give the SMMU device to dom0.
>> + */
>> + dt_device_set_used_by(dev, DOMID_XEN);
>> +
>> + rc = arm_smmu_device_probe(dt_to_dev(dev));
>> + if (rc)
>> + return rc;
>> +
>> + iommu_set_ops(&arm_smmu_iommu_ops);
>> +
>> + return 0;
>> +}
>> +
>> +DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
>> + .dt_match = arm_smmu_of_match,
>> + .init = arm_smmu_dt_init,
>> +DT_DEVICE_END
>> +
>> +#ifdef CONFIG_ACPI
>> +/* Set up the IOMMU */
>> +static int __init arm_smmu_acpi_init(const void *data)
>> +{
>> + int rc;
>> + rc = arm_smmu_device_probe((struct device *)data);
>> +
>> + if (rc)
>> + return rc;
>> +
>> + iommu_set_ops(&arm_smmu_iommu_ops);
>> + return 0;
>> +}
>> +
>> +ACPI_DEVICE_START(asmmuv3, "ARM SMMU V3", DEVICE_IOMMU)
>> + .class_type = ACPI_IORT_NODE_SMMU_V3,
>> + .init = arm_smmu_acpi_init,
>> +ACPI_DEVICE_END
>> +
>> +#endif
>>
> Cheers,
>
--
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc. Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
end of thread, other threads:[~2017-12-16 6:05 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-05 3:59 [RFC v3 0/4] SMMUv3 Driver Sameer Goel
2017-12-05 3:59 ` [RFC v3 1/4] Port WARN_ON_ONCE() from Linux Sameer Goel
2017-12-05 9:18 ` Jan Beulich
2017-12-05 3:59 ` [RFC v3 2/4] xen/linux_compat: Add a Linux compat header Sameer Goel
2017-12-05 9:20 ` Jan Beulich
2017-12-05 11:37 ` Julien Grall
2017-12-05 12:31 ` Julien Grall
2017-12-15 22:32 ` Goel, Sameer
2017-12-15 22:39 ` Julien Grall
2017-12-05 3:59 ` [RFC v3 3/4] Add verbatim copy of arm-smmu-v3.c from Linux Sameer Goel
2017-12-05 3:59 ` [RFC v3 4/4] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver Sameer Goel
2017-12-05 14:17 ` Julien Grall
2017-12-05 23:26 ` Goel, Sameer
2017-12-06 9:55 ` Julien Grall
2017-12-06 10:01 ` Julien Grall
2017-12-15 22:45 ` Goel, Sameer
2017-12-16 6:05 ` Goel, Sameer
2017-12-12 8:09 ` Manish Jaggi
2017-12-12 15:34 ` Goel, Sameer