* [PATCH v03 00/10] arm: introduce remoteprocessor iommu module
@ 2014-09-02 15:46 Andrii Tseglytskyi
  2014-09-02 15:46 ` [PATCH v03 01/10] xen: implement guest_physmap_pin_range Andrii Tseglytskyi
                   ` (9 more replies)
  0 siblings, 10 replies; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

The following patch series introduces an IOMMU translation
framework for remote processors. Remote processors are
typically used for graphics rendering (GPUs) and
high-quality video decoding (IPUs), and are commonly found
on multimedia SoCs such as OMAP4 / OMAP5.

Since a remote processor MMU works with pagetables filled
with physical addresses allocated by the domU kernel, such
pagetables cannot be used under Xen as-is: the intermediate
physical addresses allocated by the kernel need to be
translated to the machine addresses managed by Xen.
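
Conceptually, every pagetable entry written by the guest has to be
rewritten before the remote processor's MMU may consume it. A minimal
illustration of that per-entry step (not code from the series;
ipa_to_ma() stands in for Xen's p2m lookup):

/* Illustrative only: rewrite one 32-bit pagetable entry from a guest
 * intermediate physical address (IPA) to a machine address (MA). */
#include <stdint.h>

#define ENTRY_ADDR_MASK 0xfffff000u  /* 4 KB small-page entry: bits [31:12] */

static uint32_t translate_entry(uint32_t guest_entry,
                                uint64_t (*ipa_to_ma)(uint64_t ipa))
{
    uint64_t ipa   = guest_entry & ENTRY_ADDR_MASK;  /* address bits */
    uint32_t flags = guest_entry & ~ENTRY_ADDR_MASK; /* descriptor flags */

    return (uint32_t)ipa_to_ma(ipa) | flags;         /* MA plus original flags */
}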

Changes in v03
- Rebased to latest Xen master branch
- Added an XSM security check for domains which perform
  remoteproc MMU accesses
- Added the ability to pin a pfn to an mfn. This functionality
  was introduced some time ago by Stefano:
  http://marc.info/?l=xen-devel&m=138029864707973
- ioremap_nocache() calls changed to the appropriate map_domain_page()
  calls
- remoteproc iommu module moved to
  src: xen/arch/arm/remoteproc/
  hdr: xen/include/asm-arm/
- Other review comments were addressed

Andrii Tseglytskyi (9):
  domctl: introduce access_remote_pagetable call
  xsm: arm: create domU_rpc_t security label
  arm: introduce remoteprocessor iommu module
  arm: omap: introduce iommu translation for IPU remoteproc
  arm: omap: introduce iommu translation for GPU remoteproc
  arm: introduce remoteproc_mmu_translate_pagetable mem subops call
  arm: add trap for remoteproc mmio accesses
  arm: omap: introduce print pagetable function for IPU remoteproc
  arm: omap: introduce print pagetable function for GPU remoteproc

Stefano Stabellini (1):
  xen: implement guest_physmap_pin_range

 tools/flask/policy/policy/modules/xen/xen.te |  14 +
 xen/arch/arm/Makefile                        |   1 +
 xen/arch/arm/Rules.mk                        |   1 +
 xen/arch/arm/mm.c                            |   8 +
 xen/arch/arm/p2m.c                           |  82 ++++
 xen/arch/arm/remoteproc/Makefile             |   2 +
 xen/arch/arm/remoteproc/omap_iommu.c         | 559 +++++++++++++++++++++++++++
 xen/arch/arm/remoteproc/remoteproc_iommu.c   | 459 ++++++++++++++++++++++
 xen/common/domain.c                          |   7 +
 xen/include/asm-arm/mm.h                     |  11 +
 xen/include/asm-arm/remoteproc_iommu.h       |  88 +++++
 xen/include/asm-x86/p2m.h                    |  20 +
 xen/include/public/domctl.h                  |   1 +
 xen/include/public/memory.h                  |  14 +-
 xen/xsm/flask/hooks.c                        |   3 +
 xen/xsm/flask/policy/access_vectors          |   2 +
 16 files changed, 1271 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/remoteproc/Makefile
 create mode 100644 xen/arch/arm/remoteproc/omap_iommu.c
 create mode 100644 xen/arch/arm/remoteproc/remoteproc_iommu.c
 create mode 100644 xen/include/asm-arm/remoteproc_iommu.h

-- 
1.9.1


* [PATCH v03 01/10] xen: implement guest_physmap_pin_range
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  2014-09-03  9:43   ` Jan Beulich
  2014-09-11  1:12   ` Julien Grall
  2014-09-02 15:46 ` [PATCH v03 02/10] domctl: introduce access_remote_pagetable call Andrii Tseglytskyi
                   ` (8 subsequent siblings)
  9 siblings, 2 replies; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

guest_physmap_pin_range pins a range of guest pages so that their p2m
mappings won't be changed.
guest_physmap_unpin_range unpins the previously pinned pages.
The pinning is done using a new count_info flag.

Provide empty stubs for x86.
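
An illustrative (hypothetical) caller, loosely modelled on the
remoteproc IOMMU code later in this series, showing how the new API is
meant to be used around a single pagetable page:

/* Sketch only: pin one page (order 0) before caching its mfn in a
 * hypervisor-owned structure; unpin it again when the entry is dropped. */
static int cache_pagetable_mfn(struct domain *d, xen_pfn_t mfn)
{
    int rc;

    if ( guest_physmap_pinned_range(d, mfn, 0) )
        return 0;                      /* already pinned, nothing to do */

    rc = guest_physmap_pin_range(d, mfn, 0);
    if ( rc )
        return rc;                     /* -EINVAL or -EBUSY */

    /* ... record mfn; later, when dropping the entry:
     *     guest_physmap_unpin_range(d, mfn, 0);
     */
    return 0;
}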

Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>:
- rebased to latest master branch
- added guest_physmap_pinned_range() API
- pass mfn instead of gmfn

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/p2m.c        | 82 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/mm.h  | 11 +++++++
 xen/include/asm-x86/p2m.h | 20 ++++++++++++
 3 files changed, 113 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 46ec01c..b3a16d3 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -214,6 +214,79 @@ err:
     return maddr;
 }
 
+int guest_physmap_pin_range(struct domain *d,
+                            xen_pfn_t mfn,
+                            unsigned int order)
+{
+    int i;
+    struct page_info *page;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        if ( !mfn_valid(mfn + i) )
+            return -EINVAL;
+
+        page = mfn_to_page(mfn + i);
+        if ( !page )
+            return -EINVAL;
+
+        if ( !get_page_type(page, PGT_writable_page) )
+            return -EINVAL;
+
+        if ( test_and_set_bit(_PGC_p2m_pinned, &page->count_info) )
+            return -EBUSY;
+    }
+    return 0;
+}
+
+int guest_physmap_unpin_range(struct domain *d,
+                              xen_pfn_t mfn,
+                              unsigned int order)
+{
+    int i;
+    struct page_info *page;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        if ( !mfn_valid(mfn + i) )
+            return -EINVAL;
+
+        page = mfn_to_page(mfn + i);
+        if ( !page )
+            return -EINVAL;
+
+        if ( !test_and_clear_bit(_PGC_p2m_pinned, &page->count_info) )
+            return -EINVAL;
+    }
+    return 0;
+}
+
+int guest_physmap_pinned_range(struct domain *d,
+                               xen_pfn_t mfn,
+                               unsigned int order)
+{
+    int i, pins = 0;
+    struct page_info *page;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        if ( !mfn_valid(mfn + i) )
+            return 0;
+
+        page = mfn_to_page(mfn + i);
+        if ( !page )
+            return 0;
+
+        if ( test_bit(_PGC_p2m_pinned, &page->count_info) )
+            pins++;
+    }
+
+    if ( i && (i == pins) )
+        return 1;
+
+    return 0;
+}
+
 int guest_physmap_mark_populate_on_demand(struct domain *d,
                                           unsigned long gfn,
                                           unsigned int order)
@@ -478,10 +551,18 @@ static int apply_one_level(struct domain *d,
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t pte;
     const lpae_t orig_pte = *entry;
+    struct page_info *page = mfn_to_page(orig_pte.p2m.base);
     int rc;
 
     BUG_ON(level > 3);
 
+    if ( guest_physmap_pinned_range(d, orig_pte.p2m.base, 0) )
+    {
+        gdprintk(XENLOG_WARNING, "cannot change p2m mapping for paddr=%"PRIpaddr
+                 " domid=%d, the page is pinned count_info %lu\n", *addr, d->domain_id, page->count_info);
+        return -EINVAL;
+    }
+
     switch ( op )
     {
     case ALLOCATE:
@@ -819,6 +900,7 @@ static int apply_p2m_changes(struct domain *d,
                               &addr, &maddr, &flush,
                               mattr, t);
         if ( ret < 0 ) { rc = ret ; goto out; }
+
         /* L3 had better have done something! We cannot descend any further */
         BUG_ON(ret == P2M_ONE_DESCEND);
         count += ret;
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 9fa80a4..f6d9e6b 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -92,6 +92,10 @@ struct page_info
   /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
+/* The page belongs to a guest and it has been pinned. */
+#define _PGC_p2m_pinned   PG_shift(3)
+#define PGC_p2m_pinned    PG_mask(1, 3)
+
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
@@ -340,6 +344,13 @@ void free_init_memory(void);
 int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
                                           unsigned int order);
 
+int guest_physmap_pin_range(struct domain *d, paddr_t mfn,
+                            unsigned int order);
+int guest_physmap_unpin_range(struct domain *d, paddr_t mfn,
+                              unsigned int order);
+int guest_physmap_pinned_range(struct domain *d, paddr_t mfn,
+                               unsigned int order);
+
 extern void put_page_type(struct page_info *page);
 static inline void put_page_and_type(struct page_info *page)
 {
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 39f235d..c7f12b1 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -514,6 +514,26 @@ void guest_physmap_remove_page(struct domain *d,
 int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
                                           unsigned int order);
 
+static inline int guest_physmap_pin_range(struct domain *d,
+                                          paddr_t mfn,
+                                          unsigned int order)
+{
+    return -ENOSYS;
+}
+static inline int guest_physmap_unpin_range(struct domain *d,
+                              paddr_t mfn,
+                              unsigned int order)
+{
+    return -ENOSYS;
+}
+
+static inline int guest_physmap_pinned_range(struct domain *d,
+                              paddr_t mfn,
+                              unsigned int order)
+{
+    return -ENOSYS;
+}
+
 /* Change types across all p2m entries in a domain */
 void p2m_change_entry_type_global(struct domain *d, 
                                   p2m_type_t ot, p2m_type_t nt);
-- 
1.9.1


* [PATCH v03 02/10] domctl: introduce access_remote_pagetable call
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
  2014-09-02 15:46 ` [PATCH v03 01/10] xen: implement guest_physmap_pin_range Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  2014-09-03  9:46   ` Jan Beulich
  2014-09-02 15:46 ` [PATCH v03 03/10] xsm: arm: create domU_rpc_t security label Andrii Tseglytskyi
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

The following call is designed to check whether a domain
can access the MMU of a remote processor, such as an IPU or GPU.

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/include/public/domctl.h         | 1 +
 xen/xsm/flask/hooks.c               | 3 +++
 xen/xsm/flask/policy/access_vectors | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 8c4d4c5..eedf933 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1067,6 +1067,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_configure_domain              74
 #define XEN_DOMCTL_dtdev_op                      75
 #define XEN_DOMCTL_assign_dt_device              76
+#define XEN_DOMCTL_access_remote_pagetable       77
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 8a5ff7c..897b53f 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -718,6 +718,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_configure_domain:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CONFIGURE_DOMAIN);
 
+    case XEN_DOMCTL_access_remote_pagetable:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__ACCESS_REMOTE_PAGETABLE);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 33eec66..1a9aff1 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -202,6 +202,8 @@ class domain2
     create_hardware_domain
 # XEN_DOMCTL_configure_domain
     configure_domain
+# XEN_DOMCTL_access_remote_pagetable
+    access_remote_pagetable
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.9.1


* [PATCH v03 03/10] xsm: arm: create domU_rpc_t security label
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
  2014-09-02 15:46 ` [PATCH v03 01/10] xen: implement guest_physmap_pin_range Andrii Tseglytskyi
  2014-09-02 15:46 ` [PATCH v03 02/10] domctl: introduce access_remote_pagetable call Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  2014-09-02 15:46 ` [PATCH v03 04/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

The following security label will be used for domUs which
may access the MMU of remote processors, such as an IPU or GPU.

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 tools/flask/policy/policy/modules/xen/xen.te | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index 999b351..d6184d7 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -110,6 +110,7 @@ admin_device(dom0_t, irq_t)
 admin_device(dom0_t, ioport_t)
 admin_device(dom0_t, iomem_t)
 admin_device(domU_t, iomem_t)
+admin_device(domU_rpc_t, iomem_t)
 
 domain_comms(dom0_t, dom0_t)
 
@@ -169,6 +170,18 @@ manage_domain(dom0_t, nomigrate_t)
 domain_comms(dom0_t, nomigrate_t)
 domain_self_comms(nomigrate_t)
 
+# declare domain which handles remoteprocessor
+declare_domain(domU_rpc_t)
+domain_self_comms(domU_rpc_t)
+create_domain(dom0_t, domU_rpc_t)
+manage_domain(dom0_t, domU_rpc_t)
+domain_comms(dom0_t, domU_rpc_t)
+domain_comms(domU_rpc_t, domU_rpc_t)
+domain_self_comms(domU_rpc_t)
+allow domU_rpc_t domU_rpc_t_self:resource add;
+allow dom0_t domU_rpc_t:domain2 access_remote_pagetable;
+allow domU_rpc_t domU_rpc_t_self:domain2 access_remote_pagetable;
+
 ###############################################################################
 #
 # Device delegation
@@ -181,6 +194,7 @@ admin_device(dom0_t, nic_dev_t)
 use_device(domU_t, nic_dev_t)
 
 delegate_devices(dom0_t, domU_t)
+delegate_devices(dom0_t, domU_rpc_t)
 
 ###############################################################################
 #
-- 
1.9.1


* [PATCH v03 04/10] arm: introduce remoteprocessor iommu module
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
                   ` (2 preceding siblings ...)
  2014-09-02 15:46 ` [PATCH v03 03/10] xsm: arm: create domU_rpc_t security label Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  2014-09-11  0:41   ` Julien Grall
  2014-09-02 15:46 ` [PATCH v03 05/10] arm: omap: introduce iommu translation for IPU remoteproc Andrii Tseglytskyi
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

The remote processor IOMMU module handles the memory
management units of remote (external) processors.
Remote processors are typically used for graphics
rendering (GPUs) and high-quality video decoding (IPUs),
and are commonly found on multimedia SoCs such as
OMAP4 / OMAP5.

Since a remote processor MMU works with pagetables filled
with physical addresses allocated by the domU kernel, such
pagetables cannot be used under Xen as-is: the intermediate
physical addresses allocated by the kernel need to be
translated to machine addresses.

This patch introduces a simple framework to perform
pfn -> mfn translation for external MMUs.
It introduces the basic data structures and algorithms
needed for translation.

Typically, when an MMU is configured, some of its registers
are updated with new values. The introduced framework uses
traps on these register writes as the starting point for
translating remoteproc MMU pagetables.
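
For illustration, a platform backend plugs into this framework by
filling in a struct mmu_info and adding it to mmu_list[] (the OMAP
backends in later patches do exactly this). A minimal sketch, with
placeholder addresses, offsets and callbacks that do not describe a
real device:

/* Sketch only: field names match struct mmu_info / struct pagetable_data
 * from this patch; the values and the example_* helpers are assumptions. */
static const struct pagetable_data example_pg_data = {
    .pgd_shift     = 20,   /* 1 MB first-level sections */
    .pte_shift     = 12,   /* 4 KB second-level pages   */
    .super_shift   = 24,
    .section_shift = 20,
};

static u32 example_trap_offsets[] = {
    0x10,                  /* offset of the TTB register within mem_start */
};

struct mmu_info example_mmu = {
    .name                 = "EXAMPLE_MMU",
    .pg_data              = &example_pg_data,
    .trap_offsets         = example_trap_offsets,
    .num_traps            = ARRAY_SIZE(example_trap_offsets),
    .mem_start            = 0x40000000,        /* MMIO base (placeholder) */
    .mem_size             = 0x1000,
    .copy_pagetable_pfunc = example_copy_pagetable,      /* assumed helper */
    .translate_pfunc      = example_translate_pagetable, /* assumed helper */
};

/* ...and the entry added to the list in remoteproc_iommu.c: */
static struct mmu_info *mmu_list[] = {
    &example_mmu,
};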

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/Makefile                      |   1 +
 xen/arch/arm/Rules.mk                      |   1 +
 xen/arch/arm/remoteproc/Makefile           |   1 +
 xen/arch/arm/remoteproc/remoteproc_iommu.c | 426 +++++++++++++++++++++++++++++
 xen/include/asm-arm/remoteproc_iommu.h     |  82 ++++++
 5 files changed, 511 insertions(+)
 create mode 100644 xen/arch/arm/remoteproc/Makefile
 create mode 100644 xen/arch/arm/remoteproc/remoteproc_iommu.c
 create mode 100644 xen/include/asm-arm/remoteproc_iommu.h

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index c13206f..300db2f 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -1,5 +1,6 @@
 subdir-$(arm32) += arm32
 subdir-$(arm64) += arm64
+subdir-$(HAS_REMOTEPROC) += remoteproc
 subdir-y += platforms
 
 obj-$(EARLY_PRINTK) += early_printk.o
diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index 8658176..b178eab 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -110,6 +110,7 @@ CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_INC=\"debug-$(EARLY_PRINTK_INC).inc\"
 CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_BAUD=$(EARLY_PRINTK_BAUD)
 CFLAGS-$(EARLY_PRINTK) += -DEARLY_UART_BASE_ADDRESS=$(EARLY_UART_BASE_ADDRESS)
 CFLAGS-$(EARLY_PRINTK) += -DEARLY_UART_REG_SHIFT=$(EARLY_UART_REG_SHIFT)
+CFLAGS-$(HAS_REMOTEPROC)   += -DHAS_REMOTEPROC
 
 else # !debug
 
diff --git a/xen/arch/arm/remoteproc/Makefile b/xen/arch/arm/remoteproc/Makefile
new file mode 100644
index 0000000..0b0ee0e
--- /dev/null
+++ b/xen/arch/arm/remoteproc/Makefile
@@ -0,0 +1 @@
+obj-y += remoteproc_iommu.o
diff --git a/xen/arch/arm/remoteproc/remoteproc_iommu.c b/xen/arch/arm/remoteproc/remoteproc_iommu.c
new file mode 100644
index 0000000..e73711a
--- /dev/null
+++ b/xen/arch/arm/remoteproc/remoteproc_iommu.c
@@ -0,0 +1,426 @@
+/*
+ * xen/arch/arm/remoteproc_iommu.c
+ *
+ * Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
+ * Copyright (c) 2014 GlobalLogic
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/config.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/mm.h>
+#include <xen/domain_page.h>
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/stdbool.h>
+#include <asm/system.h>
+#include <asm/current.h>
+#include <asm/io.h>
+#include <asm/p2m.h>
+#include <xsm/xsm.h>
+
+#include <asm/remoteproc_iommu.h>
+
+static struct mmu_info *mmu_list[] = {
+};
+
+#define mmu_for_each(pfunc, data)                       \
+({                                                      \
+    u32 __i;                                            \
+    int __res = 0;                                      \
+                                                        \
+    for ( __i = 0; __i < ARRAY_SIZE(mmu_list); __i++ )  \
+    {                                                   \
+        __res = pfunc(mmu_list[__i], data);             \
+        if ( __res )                                    \
+            break;                                      \
+    }                                                   \
+    __res;                                              \
+})
+
+static bool mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
+{
+    if ( (addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)) )
+        return true;
+
+    return false;
+}
+
+static inline struct mmu_info *mmu_lookup(paddr_t addr)
+{
+    u32 i;
+
+    /* enumerate all registered MMUs and check whether the address is in range */
+    for ( i = 0; i < ARRAY_SIZE(mmu_list); i++ )
+    {
+        /* check whether the address belongs to this MMU */
+        if ( !mmu_check_mem_range(mmu_list[i], addr) )
+            continue;
+
+        /* check whether the MMU is properly initialized */
+        if ( !mmu_list[i]->mem_map )
+            continue;
+
+        return mmu_list[i];
+    }
+
+    return NULL;
+}
+
+struct mmu_pagetable *mmu_pagetable_lookup(struct mmu_info *mmu, paddr_t addr, bool is_maddr)
+{
+    struct mmu_pagetable *pgt;
+    paddr_t pgt_addr;
+
+    list_for_each_entry(pgt, &mmu->pagetables_list, link_node)
+    {
+        if ( is_maddr )
+            pgt_addr = pgt->maddr;
+        else
+            pgt_addr = pgt->paddr;
+
+        if ( pgt_addr == addr )
+            return pgt;
+    }
+
+    return NULL;
+}
+
+static struct mmu_pagetable *mmu_alloc_pagetable(struct mmu_info *mmu, paddr_t paddr)
+{
+    struct mmu_pagetable *pgt;
+    u32 pgt_size = MMU_PGD_TABLE_SIZE(mmu);
+
+    pgt = xzalloc_bytes(sizeof(struct mmu_pagetable));
+    if ( !pgt )
+    {
+        pr_mmu(mmu, "failed to alloc pagetable structure");
+        return NULL;
+    }
+
+    /* allocate pagetable managed by hypervisor */
+    pgt->hyp_pagetable = xzalloc_bytes(pgt_size);
+    if ( !pgt->hyp_pagetable )
+    {
+        pr_mmu(mmu, "failed to alloc private hyp_pagetable");
+        return NULL;
+    }
+
+    /* allocate a pagetable for storing IPAs */
+    pgt->kern_pagetable = xzalloc_bytes(pgt_size);
+    if ( !pgt->kern_pagetable )
+    {
+        pr_mmu(mmu, "failed to alloc private kern_pagetable");
+        return NULL;
+    }
+
+    pr_mmu(mmu, "private pagetables for paddr 0x%"PRIpaddr" size %u bytes (main 0x%"PRIpaddr", temp 0x%"PRIpaddr")",
+           paddr, pgt_size, __pa(pgt->hyp_pagetable), __pa(pgt->kern_pagetable));
+
+    pgt->paddr = paddr;
+
+    list_add(&pgt->link_node, &mmu->pagetables_list);
+
+    return pgt;
+}
+
+static paddr_t mmu_translate_pagetable(struct mmu_info *mmu, paddr_t paddr)
+{
+    struct mmu_pagetable *pgt;
+    int res;
+
+    /* sanity check */
+    if ( !mmu->copy_pagetable_pfunc || !mmu->translate_pfunc )
+    {
+        pr_mmu(mmu, "translation callbacks are not defined");
+        return INVALID_PADDR;
+    }
+
+    /* lookup using machine address first */
+    pgt = mmu_pagetable_lookup(mmu, paddr, true);
+    if ( !pgt )
+    {
+        /* lookup using kernel physical address */
+        pgt = mmu_pagetable_lookup(mmu, paddr, false);
+        if ( !pgt )
+        {
+            /* if the pagetable doesn't exist in the lookup list, allocate it */
+            pgt = mmu_alloc_pagetable(mmu, paddr);
+            if ( !pgt )
+                return INVALID_PADDR;
+        }
+    }
+
+    pgt->maddr = INVALID_PADDR;
+
+    /* copy pagetable from domain to hypervisor */
+    res = mmu->copy_pagetable_pfunc(mmu, pgt);
+    if ( res )
+        return res;
+
+    /* translate pagetable */
+    pgt->maddr = mmu->translate_pfunc(mmu, pgt);
+    return pgt->maddr;
+}
+
+static paddr_t mmu_trap_translate_pagetable(struct mmu_info *mmu, mmio_info_t *info)
+{
+    register_t *reg;
+    bool valid_trap = false;
+    paddr_t paddr;
+    u32 i;
+
+    reg = select_user_reg(guest_cpu_user_regs(), info->dabt.reg);
+
+    paddr = *reg;
+    if ( !paddr )
+        return INVALID_PADDR;
+
+    /* check whether the register is a valid TTB register */
+    for ( i = 0; i < mmu->num_traps; i++ )
+    {
+        if ( mmu->trap_offsets[i] == (info->gpa - mmu->mem_start) )
+        {
+            valid_trap = true;
+            break;
+        }
+    }
+
+    if ( !valid_trap )
+        return INVALID_PADDR;
+
+    return mmu_translate_pagetable(mmu, paddr);
+}
+
+paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
+                                                struct mmu_pagetable *pgt,
+                                                paddr_t maddr, paddr_t hyp_addr)
+{
+    u32 *pte_table = NULL, *hyp_pte_table = NULL;
+    u32 i;
+
+    /* map second level translation table */
+    pte_table = map_domain_page(maddr>>PAGE_SHIFT);
+    if ( !pte_table )
+    {
+        pr_mmu(mmu, "failed to map pte table");
+        return INVALID_PADDR;
+    }
+
+    clean_and_invalidate_xen_dcache_va_range(pte_table, PAGE_SIZE);
+    /* allocate new second level pagetable once */
+    if ( 0 == hyp_addr )
+    {
+        hyp_pte_table = xzalloc_bytes(PAGE_SIZE);
+        if ( !hyp_pte_table )
+        {
+            pr_mmu(mmu, "failed to alloc new pte table");
+            return INVALID_PADDR;
+        }
+    }
+    else
+    {
+        hyp_pte_table = __va(hyp_addr & PAGE_MASK);
+    }
+
+    /* 2-nd level translation */
+    for ( i = 0; i < MMU_PTRS_PER_PTE(mmu); i++ )
+    {
+        paddr_t pt_maddr, pt_paddr, pt_flags;
+        u32 pt_mask = MMU_SECTION_MASK(mmu->pg_data->pte_shift);
+        int res;
+
+        if ( !pte_table[i] )
+        {
+            /* handle the case when page was removed */
+            if ( unlikely(hyp_pte_table[i]) )
+            {
+                guest_physmap_unpin_range(current->domain,
+                            (hyp_pte_table[i] & pt_mask) >> PAGE_SHIFT, 0);
+                hyp_pte_table[i] = 0;
+            }
+
+            continue;
+        }
+
+        pt_paddr = pte_table[i] & pt_mask;
+        pt_flags = pte_table[i] & ~pt_mask;
+        pt_maddr = p2m_lookup(current->domain, pt_paddr, NULL);
+
+        if ( INVALID_PADDR == pt_maddr )
+        {
+            pr_mmu(mmu, "can't translate pfn 0x%"PRIpaddr"", pt_paddr);
+            unmap_domain_page(pte_table);
+            return INVALID_PADDR;
+        }
+
+        if ( !guest_physmap_pinned_range(current->domain, pt_maddr >> PAGE_SHIFT, 0) )
+        {
+            res = guest_physmap_pin_range(current->domain, pt_maddr >> PAGE_SHIFT, 0);
+            if ( res )
+            {
+                pr_mmu(mmu, "can't pin page pfn 0x%"PRIpaddr" mfn 0x%"PRIpaddr" res %d",
+                       pt_paddr, pt_maddr, res);
+                unmap_domain_page(pte_table);
+                return INVALID_PADDR;
+            }
+        }
+
+        hyp_pte_table[i] = pt_maddr | pt_flags;
+        pgt->page_counter++;
+    }
+
+    unmap_domain_page(pte_table);
+
+    clean_and_invalidate_xen_dcache_va_range(hyp_pte_table, MMU_PTE_TABLE_SIZE(mmu));
+    return __pa(hyp_pte_table);
+}
+
+static int mmu_init(struct mmu_info *mmu, u32 data)
+{
+    ASSERT(mmu);
+    ASSERT(!mmu->mem_map);
+
+    INIT_LIST_HEAD(&mmu->pagetables_list);
+
+    /* map MMU memory */
+    mmu->mem_map = ioremap_nocache(mmu->mem_start, mmu->mem_size);
+    if ( !mmu->mem_map )
+    {
+        pr_mmu(mmu, "failed to map memory");
+        return -EINVAL;
+    }
+
+    pr_mmu(mmu, "memory map = 0x%p", mmu->mem_map);
+
+    spin_lock_init(&mmu->lock);
+
+    return 0;
+}
+
+static int mmu_mmio_read(struct vcpu *v, mmio_info_t *info)
+{
+    struct mmu_info *mmu = NULL;
+    unsigned long flags;
+    register_t *reg;
+
+    reg = select_user_reg(guest_cpu_user_regs(), info->dabt.reg);
+
+    mmu = mmu_lookup(info->gpa);
+    if ( !mmu )
+    {
+        pr_mmu(mmu, "can't get mmu for addr 0x%"PRIpaddr"", info->gpa);
+        return -EINVAL;
+    }
+
+    spin_lock_irqsave(&mmu->lock, flags);
+    *reg = readl(mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+    spin_unlock_irqrestore(&mmu->lock, flags);
+
+    return 1;
+}
+
+static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
+{
+    struct mmu_info *mmu = NULL;
+    unsigned long flags;
+    register_t *reg;
+    paddr_t new_addr, val;
+
+    reg = select_user_reg(guest_cpu_user_regs(), info->dabt.reg);
+
+    /* find corresponding MMU */
+    mmu = mmu_lookup(info->gpa);
+    if ( !mmu )
+    {
+        pr_mmu(mmu, "can't get mmu for addr 0x%"PRIpaddr"", info->gpa);
+        return -EINVAL;
+    }
+
+    spin_lock_irqsave(&mmu->lock, flags);
+
+    /* get new address of translated pagetable */
+    new_addr = mmu_trap_translate_pagetable(mmu, info);
+    if ( INVALID_PADDR != new_addr )
+        val = new_addr;
+    else
+        val = *reg;
+
+    writel(val, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+    spin_unlock_irqrestore(&mmu->lock, flags);
+
+    return 1;
+}
+
+static const struct mmio_handler_ops remoteproc_mmio_handler_ops = {
+    .read_handler  = mmu_mmio_read,
+    .write_handler = mmu_mmio_write,
+};
+
+static int mmu_register_mmio_handler(struct mmu_info *mmu, u32 data)
+{
+    struct domain *dom = (struct domain *) data;
+
+    register_mmio_handler(dom, &remoteproc_mmio_handler_ops,
+                          mmu->mem_start,
+                          mmu->mem_size);
+
+    pr_mmu(mmu, "register mmio handler dom %u base 0x%"PRIpaddr", size 0x%"PRIpaddr"",
+           dom->domain_id, mmu->mem_start, mmu->mem_size);
+
+    return 0;
+}
+
+int remoteproc_iommu_register_mmio_handlers(struct domain *dom)
+{
+    int res;
+
+    if ( is_idle_domain(dom) )
+        return -EPERM;
+
+    /* check whether the domain is allowed to access the remoteproc MMU */
+    res = xsm_domctl(XSM_HOOK, dom, XEN_DOMCTL_access_remote_pagetable);
+    if ( res )
+    {
+        printk(XENLOG_ERR "dom %u is not allowed to access remoteproc MMU res (%d)",
+               dom->domain_id, res);
+        return -EPERM;
+    }
+
+    return mmu_for_each(mmu_register_mmio_handler, (u32)dom);
+}
+
+static int mmu_init_all(void)
+{
+    int res;
+
+    res = mmu_for_each(mmu_init, 0);
+    if ( res )
+    {
+        printk("%s error during init %d\n", __func__, res);
+        return res;
+    }
+
+    return 0;
+}
+
+__initcall(mmu_init_all);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/remoteproc_iommu.h b/xen/include/asm-arm/remoteproc_iommu.h
new file mode 100644
index 0000000..6fa78ee
--- /dev/null
+++ b/xen/include/asm-arm/remoteproc_iommu.h
@@ -0,0 +1,82 @@
+/*
+ * xen/include/xen/remoteproc_iommu.h
+ *
+ * Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
+ * Copyright (c) 2014 GlobalLogic
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _REMOTEPROC_IOMMU_H_
+#define _REMOTEPROC_IOMMU_H_
+
+#include <asm/types.h>
+
+#define MMU_SECTION_SIZE(shift)     (1UL << (shift))
+#define MMU_SECTION_MASK(shift)     (~(MMU_SECTION_SIZE(shift) - 1))
+
+/* 4096 first level descriptors for "supersection" and "section" */
+#define MMU_PTRS_PER_PGD(mmu)       (1UL << (32 - (mmu->pg_data->pgd_shift)))
+#define MMU_PGD_TABLE_SIZE(mmu)     (MMU_PTRS_PER_PGD(mmu) * sizeof(u32))
+
+/* 256 second level descriptors for "small" and "large" pages */
+#define MMU_PTRS_PER_PTE(mmu)       (1UL << ((mmu->pg_data->pgd_shift) - (mmu->pg_data->pte_shift)))
+#define MMU_PTE_TABLE_SIZE(mmu)     (MMU_PTRS_PER_PTE(mmu) * sizeof(u32))
+
+/* 16 sections in supersection */
+#define MMU_SECTION_PER_SUPER(mmu)  (1UL << ((mmu->pg_data->super_shift) - (mmu->pg_data->section_shift)))
+
+#define pr_mmu(mmu, fmt, ...) \
+    printk(XENLOG_ERR"%s(%d): %s: "fmt"\n", __func__, __LINE__,\
+    ((mmu) ? (mmu)->name : ""), ##__VA_ARGS__)
+
+struct pagetable_data {
+    /* 1st level translation */
+    u32 pgd_shift;
+    u32 pte_shift;
+    u32 super_shift;
+    u32 section_shift;
+    /* 2nd level translation */
+    u32 pte_large_shift;
+};
+
+struct mmu_pagetable {
+    void                *hyp_pagetable;
+    void                *kern_pagetable;
+    paddr_t             paddr;
+    paddr_t             maddr;
+    struct list_head    link_node;
+    u32                 page_counter;
+};
+
+struct mmu_info {
+    const char  *name;
+    const struct pagetable_data *pg_data;
+    /* register where phys pointer to pagetable is stored */
+    u32                 *trap_offsets;
+    paddr_t             mem_start;
+    paddr_t             mem_size;
+    spinlock_t          lock;
+    struct list_head    pagetables_list;
+    u32                 num_traps;
+    void __iomem		*mem_map;
+    paddr_t	(*translate_pfunc)(struct mmu_info *, struct mmu_pagetable *);
+    int (*copy_pagetable_pfunc)(struct mmu_info *mmu, struct mmu_pagetable *pgt);
+    void (*print_pagetable_pfunc)(struct mmu_info *);
+};
+
+int remoteproc_iommu_register_mmio_handlers(struct domain *dom);
+
+paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
+                                                 struct mmu_pagetable *pgt,
+                                                 paddr_t maddr, paddr_t hyp_addr);
+
+#endif /* _REMOTEPROC_IOMMU_H_ */
-- 
1.9.1


* [PATCH v03 05/10] arm: omap: introduce iommu translation for IPU remoteproc
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
                   ` (3 preceding siblings ...)
  2014-09-02 15:46 ` [PATCH v03 04/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  2014-09-02 15:46 ` [PATCH v03 06/10] arm: omap: introduce iommu translation for GPU remoteproc Andrii Tseglytskyi
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

The following patch introduces platform-specific MMU data
definitions and a pagetable translation function for the OMAP5
IPU remoteproc. This MMU is a bit special - it typically performs
one-level translation and maps big chunks of memory: 16 MB
supersections and 1 MB sections are mapped instead of 4 KB pages.
The introduced algorithm internally remaps these big sections
to small 4 KB pages.
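
To illustrate the remapping (illustrative arithmetic only, not code
from the patch): a 16 MB supersection is broken into 16 sections of
1 MB, and each section is then described by a 256-entry second-level
table of 4 KB pages:

/* Self-contained illustration of the supersection split. */
#include <stdio.h>

#define SUPER_SHIFT   24   /* 16 MB supersection */
#define SECTION_SHIFT 20   /*  1 MB section      */
#define PAGE_SHIFT    12   /*  4 KB small page   */

int main(void)
{
    unsigned sections_per_super = 1u << (SUPER_SHIFT - SECTION_SHIFT); /* 16  */
    unsigned pages_per_section  = 1u << (SECTION_SHIFT - PAGE_SHIFT);  /* 256 */

    /* one supersection becomes 16 second-level tables of 256 entries:
     * 16 * 256 * 4 KB = 16 MB */
    printf("%u sections, %u PTEs each, %u pages total\n",
           sections_per_super, pages_per_section,
           sections_per_super * pages_per_section);
    return 0;
}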

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/remoteproc/Makefile           |   1 +
 xen/arch/arm/remoteproc/omap_iommu.c       | 325 +++++++++++++++++++++++++++++
 xen/arch/arm/remoteproc/remoteproc_iommu.c |   1 +
 xen/include/asm-arm/remoteproc_iommu.h     |   2 +
 4 files changed, 329 insertions(+)
 create mode 100644 xen/arch/arm/remoteproc/omap_iommu.c

diff --git a/xen/arch/arm/remoteproc/Makefile b/xen/arch/arm/remoteproc/Makefile
index 0b0ee0e..0564c1a 100644
--- a/xen/arch/arm/remoteproc/Makefile
+++ b/xen/arch/arm/remoteproc/Makefile
@@ -1 +1,2 @@
 obj-y += remoteproc_iommu.o
+obj-y += omap_iommu.o
diff --git a/xen/arch/arm/remoteproc/omap_iommu.c b/xen/arch/arm/remoteproc/omap_iommu.c
new file mode 100644
index 0000000..8ed6d0b
--- /dev/null
+++ b/xen/arch/arm/remoteproc/omap_iommu.c
@@ -0,0 +1,325 @@
+/*
+ * xen/arch/arm/platforms/omap_iommu.c
+ *
+ * Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
+ * Copyright (c) 2014 GlobalLogic
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/stdbool.h>
+#include <xen/mm.h>
+#include <xen/domain_page.h>
+#include <xen/sched.h>
+
+#include <asm/p2m.h>
+#include <asm-arm/remoteproc_iommu.h>
+
+/*
+ * "L2 table" address mask and size definitions.
+ */
+
+/* register where address of pagetable is stored */
+#define MMU_IPU_TTB_OFFSET          0x4c
+
+/* 1st level translation */
+#define MMU_OMAP_PGD_SHIFT          20
+#define MMU_OMAP_SUPER_SHIFT        24	/* "supersection" - 16 Mb */
+#define MMU_OMAP_SECTION_SHIFT      20	/* "section"  - 1 Mb */
+#define MMU_OMAP_SECOND_LEVEL_SHIFT 10
+
+/* 2nd level translation */
+#define MMU_OMAP_PTE_SMALL_SHIFT    12	/* "small page" - 4Kb */
+#define MMU_OMAP_PTE_LARGE_SHIFT    16	/* "large page" - 64 Kb */
+
+/*
+ * some descriptor attributes.
+ */
+#define IPU_PGD_TABLE       (1 << 0)
+#define IPU_PGD_SECTION     (2 << 0)
+#define IPU_PGD_SUPER       (1 << 18 | 2 << 0)
+
+#define ipu_pgd_is_table(x)     (((x) & 3) == IPU_PGD_TABLE)
+#define ipu_pgd_is_section(x)   (((x) & (1 << 18 | 3)) == IPU_PGD_SECTION)
+#define ipu_pgd_is_super(x)     (((x) & (1 << 18 | 3)) == IPU_PGD_SUPER)
+
+#define IPU_PTE_SMALL       (2 << 0)
+#define IPU_PTE_LARGE       (1 << 0)
+
+#define	OMAP_IPU_MMU_MEM_BASE   0x55082000
+
+static int mmu_omap_copy_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt);
+
+static paddr_t mmu_ipu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt);
+
+static u32 ipu_trap_offsets[] = {
+    MMU_IPU_TTB_OFFSET,
+};
+
+static const struct pagetable_data pagetable_ipu_data = {
+    .pgd_shift          = MMU_OMAP_PGD_SHIFT,
+    .super_shift        = MMU_OMAP_SUPER_SHIFT,
+    .section_shift      = MMU_OMAP_SECTION_SHIFT,
+    .pte_shift          = MMU_OMAP_PTE_SMALL_SHIFT,
+    .pte_large_shift    = MMU_OMAP_PTE_LARGE_SHIFT,
+};
+
+struct mmu_info omap_ipu_mmu = {
+    .name           = "IPU_L2_MMU",
+    .pg_data        = &pagetable_ipu_data,
+    .trap_offsets   = ipu_trap_offsets,
+    .mem_start      = OMAP_IPU_MMU_MEM_BASE,
+    .mem_size       = 0x1000,
+    .num_traps          = ARRAY_SIZE(ipu_trap_offsets),
+    .copy_pagetable_pfunc	= mmu_omap_copy_pagetable,
+    .translate_pfunc	= mmu_ipu_translate_pagetable,
+};
+
+static bool translate_supersections_to_pages = true;
+static bool translate_sections_to_pages = true;
+
+static int mmu_omap_copy_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt)
+{
+    void *pagetable = NULL;
+    paddr_t maddr;
+    u32 i;
+
+    ASSERT(mmu);
+    ASSERT(pgt);
+
+    if ( !pgt->paddr )
+        return -EINVAL;
+
+    /* pagetable size can be more than one page */
+    for ( i = 0; i < MMU_PGD_TABLE_SIZE(mmu) / PAGE_SIZE; i++ )
+    {
+        /* lookup address where remoteproc pagetable is stored by kernel */
+        maddr = p2m_lookup(current->domain, pgt->paddr + i * PAGE_SIZE, NULL);
+        if ( INVALID_PADDR == maddr )
+        {
+            pr_mmu(mmu, "failed to translate 0x%"PRIpaddr" to maddr", pgt->paddr + i * PAGE_SIZE);
+            return -EINVAL;
+        }
+
+        pagetable = map_domain_page(maddr>>PAGE_SHIFT);
+        if ( !pagetable )
+        {
+            pr_mmu(mmu, "failed to map pagetable");
+            return -EINVAL;
+        }
+
+        /* copy pagetable to hypervisor memory */
+        clean_and_invalidate_xen_dcache_va_range(pagetable, PAGE_SIZE);
+        memcpy((u32*)((u32)pgt->kern_pagetable + i * PAGE_SIZE), pagetable, PAGE_SIZE);
+
+        unmap_domain_page(pagetable);
+    }
+
+    return 0;
+}
+
+static paddr_t mmu_pte_table_alloc(struct mmu_info *mmu, paddr_t pgd, u32 sect_num,
+                               struct mmu_pagetable *pgt, paddr_t hyp_addr)
+{
+    u32 *pte = NULL;
+    u32 i;
+
+    /* allocate pte table once */
+    if ( 0 == hyp_addr )
+    {
+        pte = xzalloc_bytes(PAGE_SIZE);
+        if ( !pte )
+        {
+            pr_mmu(mmu, "failed to alloc 2nd level table");
+            return INVALID_PADDR;
+        }
+    }
+    else
+    {
+        pte = __va(hyp_addr & MMU_SECTION_MASK(mmu->pg_data->pte_shift));
+    }
+
+    ASSERT(256 == MMU_PTRS_PER_PTE(mmu));
+
+    for ( i = 0; i < MMU_PTRS_PER_PTE(mmu); i++ )
+    {
+        paddr_t paddr, maddr;
+        int res;
+
+        paddr = pgd + (i * PAGE_SIZE);
+        maddr = p2m_lookup(current->domain, paddr, NULL);
+
+        if ( INVALID_PADDR == maddr )
+        {
+            pr_mmu(mmu, "failed to lookup paddr 0x%"PRIpaddr"", paddr);
+            return INVALID_PADDR;
+        }
+
+        if ( !guest_physmap_pinned_range(current->domain, maddr >> PAGE_SHIFT, 0) )
+        {
+            res = guest_physmap_pin_range(current->domain, maddr >> PAGE_SHIFT, 0);
+            if ( res )
+            {
+                pr_mmu(mmu, "can't pin page pfn 0x%"PRIpaddr" mfn 0x%"PRIpaddr" res %d",
+                       paddr, maddr, res);
+                return INVALID_PADDR;
+            }
+        }
+
+        pte[i] = maddr | IPU_PTE_SMALL;
+        pgt->page_counter++;
+    }
+
+    clean_and_invalidate_xen_dcache_va_range(pte, PAGE_SIZE);
+    return __pa(pte) | IPU_PGD_TABLE;
+}
+
+static paddr_t mmu_ipu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt)
+{
+    /* IPU pagetable consists of set of 32 bit pointers */
+    u32 *kern_pgt, *hyp_pgt;
+    const struct pagetable_data *data;
+    u32 i;
+
+    ASSERT(mmu);
+    ASSERT(pgt);
+
+    data = mmu->pg_data;
+    kern_pgt = pgt->kern_pagetable;
+    hyp_pgt = pgt->hyp_pagetable;
+    pgt->page_counter = 0;
+
+    ASSERT(4096 == MMU_PTRS_PER_PGD(mmu));
+
+    /* 1-st level translation */
+    for ( i = 0; i < MMU_PTRS_PER_PGD(mmu); i++ )
+    {
+        paddr_t pd_maddr, pd_paddr, pd_flags, pgd_tmp;
+        paddr_t pgd = kern_pgt[i];
+        u32 pd_mask = 0;
+        int res;
+
+        if ( !pgd )
+        {
+            /* handle the case when second level translation table
+             * was removed from kernel */
+            if ( unlikely(hyp_pgt[i]) )
+            {
+                guest_physmap_unpin_range(current->domain,
+                                (hyp_pgt[i] & MMU_SECTION_MASK(MMU_OMAP_SECOND_LEVEL_SHIFT)) >> PAGE_SHIFT, 0);
+                xfree(__va(hyp_pgt[i] & MMU_SECTION_MASK(MMU_OMAP_SECOND_LEVEL_SHIFT)));
+                hyp_pgt[i] = 0;
+            }
+
+            continue;
+        }
+
+        /* first level pointers have different formats, depending on their type */
+        if ( ipu_pgd_is_super(pgd) )
+            pd_mask = MMU_SECTION_MASK(MMU_OMAP_SUPER_SHIFT);
+        else if ( ipu_pgd_is_section(pgd) )
+            pd_mask = MMU_SECTION_MASK(MMU_OMAP_SECTION_SHIFT);
+        else if ( ipu_pgd_is_table(pgd) )
+            pd_mask = MMU_SECTION_MASK(MMU_OMAP_SECOND_LEVEL_SHIFT);
+
+        pd_paddr = pgd & pd_mask;
+        pd_flags = pgd & ~pd_mask;
+        pd_maddr = p2m_lookup(current->domain, pd_paddr, NULL);
+
+        if ( INVALID_PADDR == pd_maddr )
+        {
+            pr_mmu(mmu, "failed to lookup paddr 0x%"PRIpaddr"", pd_paddr);
+            return INVALID_PADDR;
+        }
+
+        if ( !guest_physmap_pinned_range(current->domain, pd_maddr >> PAGE_SHIFT, 0) )
+        {
+            res = guest_physmap_pin_range(current->domain, pd_maddr >> PAGE_SHIFT, 0);
+            if ( res )
+            {
+                pr_mmu(mmu, "can't pin page pfn 0x%"PRIpaddr" mfn 0x%"PRIpaddr" res %d", pd_paddr, pd_maddr, res);
+                return INVALID_PADDR;
+            }
+        }
+
+        /* "supersection" 16 Mb */
+        if ( ipu_pgd_is_super(pgd) )
+        {
+            /* mapping of 16 Mb chunk is fragmented to 4 Kb pages */
+            if( likely(translate_supersections_to_pages) )
+            {
+                u32 j;
+
+                ASSERT(16 == MMU_SECTION_PER_SUPER(mmu));
+                ASSERT(1048576 == MMU_SECTION_SIZE(data->section_shift));
+
+                /* 16 Mb supersection is divided to 16 sections of 1 MB size */
+                for ( j = 0 ; j < MMU_SECTION_PER_SUPER(mmu); j++ )
+                {
+                    pgd_tmp = (pgd & ~IPU_PGD_SUPER) + (j * MMU_SECTION_SIZE(data->section_shift));
+                    hyp_pgt[i + j] = mmu_pte_table_alloc(mmu, pgd_tmp, i, pgt, hyp_pgt[i + j]);
+                }
+
+                /* move counter after supersection is translated */
+                i += (j - 1);
+            }
+            else
+            {
+                hyp_pgt[i] = pd_maddr | pd_flags;
+            }
+
+        /* "section" 1Mb */
+        }
+        else if ( ipu_pgd_is_section(pgd) )
+        {
+            if ( likely(translate_sections_to_pages) )
+            {
+                pgd_tmp = (pgd & ~IPU_PGD_SECTION);
+                hyp_pgt[i] = mmu_pte_table_alloc(mmu, pgd_tmp, i, pgt, hyp_pgt[i]);
+            }
+            else
+            {
+                hyp_pgt[i] = pd_maddr | pd_flags;
+            }
+
+        /* "table" */
+        }
+        else if ( unlikely(ipu_pgd_is_table(pgd)) )
+        {
+            ASSERT(256 == MMU_PTRS_PER_PTE(mmu));
+
+            hyp_pgt[i] = remoteproc_iommu_translate_second_level(mmu, pgt, pd_maddr, hyp_pgt[i]);
+            hyp_pgt[i] |= pd_flags;
+
+        /* error */
+        }
+        else
+        {
+            pr_mmu(mmu, "unknown entry %u: 0x%"PRIpaddr"", i, pgd);
+            return INVALID_PADDR;
+        }
+    }
+
+    /* force omap IOMMU to use new pagetable */
+    clean_and_invalidate_xen_dcache_va_range(hyp_pgt, MMU_PGD_TABLE_SIZE(mmu));
+    return __pa(hyp_pgt);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/remoteproc/remoteproc_iommu.c b/xen/arch/arm/remoteproc/remoteproc_iommu.c
index e73711a..a2cae25 100644
--- a/xen/arch/arm/remoteproc/remoteproc_iommu.c
+++ b/xen/arch/arm/remoteproc/remoteproc_iommu.c
@@ -32,6 +32,7 @@
 #include <asm/remoteproc_iommu.h>
 
 static struct mmu_info *mmu_list[] = {
+    &omap_ipu_mmu,
 };
 
 #define mmu_for_each(pfunc, data)                       \
diff --git a/xen/include/asm-arm/remoteproc_iommu.h b/xen/include/asm-arm/remoteproc_iommu.h
index 6fa78ee..e581fc3 100644
--- a/xen/include/asm-arm/remoteproc_iommu.h
+++ b/xen/include/asm-arm/remoteproc_iommu.h
@@ -79,4 +79,6 @@ paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
                                                  struct mmu_pagetable *pgt,
                                                  paddr_t maddr, paddr_t hyp_addr);
 
+extern struct mmu_info omap_ipu_mmu;
+
 #endif /* _REMOTEPROC_IOMMU_H_ */
-- 
1.9.1


* [PATCH v03 06/10] arm: omap: introduce iommu translation for GPU remoteproc
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
                   ` (4 preceding siblings ...)
  2014-09-02 15:46 ` [PATCH v03 05/10] arm: omap: introduce iommu translation for IPU remoteproc Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  2014-09-02 15:46 ` [PATCH v03 07/10] arm: introduce remoteproc_mmu_translate_pagetable mem subops call Andrii Tseglytskyi
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

The following patch introduces platform-specific MMU data
definitions and a pagetable translation function for the OMAP5
GPU remoteproc. The GPU MMU typically uses two-level address
translation, so the algorithm is quite straightforward here -
the pagetables are enumerated and every pfn is replaced with
its corresponding mfn.

The patch adds the functionality needed for proper handling of
the GPU MMU, which is very similar to the existing IPU/DSP MMUs.
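
With MMU_GPU_PGD_SHIFT = 22 and the common 4 KB small-page shift, the
geometry macros from remoteproc_iommu.h work out as below (illustrative
arithmetic only, not code from the patch):

/* Self-contained illustration of the SGX (GPU) MMU geometry. */
#include <stdio.h>

#define MMU_GPU_PGD_SHIFT        22   /* 4 MB "SGX section" */
#define MMU_OMAP_PTE_SMALL_SHIFT 12   /* 4 KB small page    */

int main(void)
{
    unsigned ptrs_per_pgd = 1u << (32 - MMU_GPU_PGD_SHIFT);      /* 1024 */
    unsigned ptrs_per_pte = 1u << (MMU_GPU_PGD_SHIFT
                                   - MMU_OMAP_PTE_SMALL_SHIFT);  /* 1024 */

    /* 1024 first-level entries, each backed by a 1024-entry second-level
     * table of 4 KB pages, i.e. 4 MB per first-level entry. */
    printf("PGD entries: %u, PTEs per table: %u\n", ptrs_per_pgd, ptrs_per_pte);
    return 0;
}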

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/remoteproc/omap_iommu.c       | 107 +++++++++++++++++++++++++++++
 xen/arch/arm/remoteproc/remoteproc_iommu.c |   1 +
 xen/include/asm-arm/remoteproc_iommu.h     |   1 +
 3 files changed, 109 insertions(+)

diff --git a/xen/arch/arm/remoteproc/omap_iommu.c b/xen/arch/arm/remoteproc/omap_iommu.c
index 8ed6d0b..f00bfc6 100644
--- a/xen/arch/arm/remoteproc/omap_iommu.c
+++ b/xen/arch/arm/remoteproc/omap_iommu.c
@@ -32,12 +32,23 @@
 /* register where address of pagetable is stored */
 #define MMU_IPU_TTB_OFFSET          0x4c
 
+#define MMU_GPU_TTB_OFFSET_00		0xc84
+#define MMU_GPU_TTB_OFFSET_01		0xc38
+#define MMU_GPU_TTB_OFFSET_02		0xc3c
+#define MMU_GPU_TTB_OFFSET_03		0xc40
+#define MMU_GPU_TTB_OFFSET_04		0xc44
+#define MMU_GPU_TTB_OFFSET_05		0xc48
+#define MMU_GPU_TTB_OFFSET_06		0xc4c
+#define MMU_GPU_TTB_OFFSET_07		0xc50
+
 /* 1st level translation */
 #define MMU_OMAP_PGD_SHIFT          20
 #define MMU_OMAP_SUPER_SHIFT        24	/* "supersection" - 16 Mb */
 #define MMU_OMAP_SECTION_SHIFT      20	/* "section"  - 1 Mb */
 #define MMU_OMAP_SECOND_LEVEL_SHIFT 10
 
+#define MMU_GPU_PGD_SHIFT			22	/* SGX section */
+
 /* 2nd level translation */
 #define MMU_OMAP_PTE_SMALL_SHIFT    12	/* "small page" - 4Kb */
 #define MMU_OMAP_PTE_LARGE_SHIFT    16	/* "large page" - 64 Kb */
@@ -57,15 +68,28 @@
 #define IPU_PTE_LARGE       (1 << 0)
 
 #define	OMAP_IPU_MMU_MEM_BASE   0x55082000
+#define	OMAP_GPU_MMU_MEM_BASE	0x56000000
 
 static int mmu_omap_copy_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt);
 
 static paddr_t mmu_ipu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt);
+static paddr_t mmu_gpu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt);
 
 static u32 ipu_trap_offsets[] = {
     MMU_IPU_TTB_OFFSET,
 };
 
+static u32 sgx_trap_offsets[] = {
+    MMU_GPU_TTB_OFFSET_00,
+    MMU_GPU_TTB_OFFSET_01,
+    MMU_GPU_TTB_OFFSET_02,
+    MMU_GPU_TTB_OFFSET_03,
+    MMU_GPU_TTB_OFFSET_04,
+    MMU_GPU_TTB_OFFSET_05,
+    MMU_GPU_TTB_OFFSET_06,
+    MMU_GPU_TTB_OFFSET_07,
+};
+
 static const struct pagetable_data pagetable_ipu_data = {
     .pgd_shift          = MMU_OMAP_PGD_SHIFT,
     .super_shift        = MMU_OMAP_SUPER_SHIFT,
@@ -85,6 +109,24 @@ struct mmu_info omap_ipu_mmu = {
     .translate_pfunc	= mmu_ipu_translate_pagetable,
 };
 
+static const struct pagetable_data pagetable_gpu_data = {
+    .pgd_shift      = MMU_GPU_PGD_SHIFT,
+    .super_shift    = MMU_GPU_PGD_SHIFT,
+    .section_shift  = MMU_GPU_PGD_SHIFT,
+    .pte_shift      = MMU_OMAP_PTE_SMALL_SHIFT,	/* the same as IPU */
+};
+
+struct mmu_info omap_gpu_mmu = {
+    .name           = "SGX_L2_MMU",
+    .pg_data        = &pagetable_gpu_data,
+    .trap_offsets   = sgx_trap_offsets,
+    .mem_start      = OMAP_GPU_MMU_MEM_BASE,
+    .mem_size       = 0x1000,
+    .num_traps      = ARRAY_SIZE(sgx_trap_offsets),
+    .copy_pagetable_pfunc	= mmu_omap_copy_pagetable,
+    .translate_pfunc    = mmu_gpu_translate_pagetable,
+};
+
 static bool translate_supersections_to_pages = true;
 static bool translate_sections_to_pages = true;
 
@@ -315,6 +357,71 @@ static paddr_t mmu_ipu_translate_pagetable(struct mmu_info *mmu, struct mmu_page
     return __pa(hyp_pgt);
 }
 
+static paddr_t mmu_gpu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt)
+{
+    /* GPU pagetable consists of set of 32 bit pointers */
+    u32 *kern_pgt, *hyp_pgt;
+    u32 i;
+
+    ASSERT(mmu);
+    ASSERT(pgt);
+
+    kern_pgt = pgt->kern_pagetable;
+    hyp_pgt = pgt->hyp_pagetable;
+    pgt->page_counter = 0;
+
+    /* 1-st level translation */
+    for ( i = 0; i < MMU_PTRS_PER_PGD(mmu); i++ )
+    {
+        paddr_t pd_maddr, pd_paddr, pd_flags, pgd;
+        u32 pd_mask = MMU_SECTION_MASK(mmu->pg_data->pte_shift);
+        int res;
+
+        pgd = kern_pgt[i];
+        if ( !pgd )
+        {
+            /* handle the case when second level translation table
+             * was removed from kernel */
+            if ( unlikely(hyp_pgt[i]) )
+            {
+                guest_physmap_unpin_range(current->domain,
+                            (hyp_pgt[i] & pd_mask) >> PAGE_SHIFT, 0);
+                xfree(__va(hyp_pgt[i] & pd_mask));
+                hyp_pgt[i] = 0;
+            }
+            continue;
+        }
+
+        pd_paddr = pgd & pd_mask;
+        pd_flags = pgd & ~pd_mask;
+        pd_maddr = p2m_lookup(current->domain, pd_paddr, NULL);
+
+        if ( INVALID_PADDR == pd_maddr )
+        {
+            pr_mmu(mmu, "failed to lookup paddr 0x%"PRIpaddr"", pd_paddr);
+            return INVALID_PADDR;
+        }
+
+        if ( !guest_physmap_pinned_range(current->domain, pd_maddr >> PAGE_SHIFT, 0) )
+        {
+            res = guest_physmap_pin_range(current->domain, pd_maddr >> PAGE_SHIFT, 0);
+            if ( res )
+            {
+                pr_mmu(mmu, "can't pin page pfn 0x%"PRIpaddr" mfn 0x%"PRIpaddr" res %d",
+                       pd_paddr, pd_maddr, res);
+                return INVALID_PADDR;
+            }
+        }
+
+        /* 2-nd level translation */
+        hyp_pgt[i] = remoteproc_iommu_translate_second_level(mmu, pgt, pd_maddr, hyp_pgt[i]);
+        hyp_pgt[i] |= pd_flags;
+    }
+
+    clean_and_invalidate_xen_dcache_va_range(hyp_pgt, MMU_PGD_TABLE_SIZE(mmu));
+    return __pa(hyp_pgt);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/remoteproc/remoteproc_iommu.c b/xen/arch/arm/remoteproc/remoteproc_iommu.c
index a2cae25..c691619 100644
--- a/xen/arch/arm/remoteproc/remoteproc_iommu.c
+++ b/xen/arch/arm/remoteproc/remoteproc_iommu.c
@@ -33,6 +33,7 @@
 
 static struct mmu_info *mmu_list[] = {
     &omap_ipu_mmu,
+    &omap_gpu_mmu,
 };
 
 #define mmu_for_each(pfunc, data)                       \
diff --git a/xen/include/asm-arm/remoteproc_iommu.h b/xen/include/asm-arm/remoteproc_iommu.h
index e581fc3..4983505 100644
--- a/xen/include/asm-arm/remoteproc_iommu.h
+++ b/xen/include/asm-arm/remoteproc_iommu.h
@@ -80,5 +80,6 @@ paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
                                                  paddr_t maddr, paddr_t hyp_addr);
 
 extern struct mmu_info omap_ipu_mmu;
+extern struct mmu_info omap_gpu_mmu;
 
 #endif /* _REMOTEPROC_IOMMU_H_ */
-- 
1.9.1


* [PATCH v03 07/10] arm: introduce remoteproc_mmu_translate_pagetable mem subops call
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
                   ` (5 preceding siblings ...)
  2014-09-02 15:46 ` [PATCH v03 06/10] arm: omap: introduce iommu translation for GPU remoteproc Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  2014-09-03  9:48   ` Jan Beulich
  2014-09-13  0:04   ` Stefano Stabellini
  2014-09-02 15:46 ` [PATCH v03 08/10] arm: add trap for remoteproc mmio accesses Andrii Tseglytskyi
                   ` (2 subsequent siblings)
  9 siblings, 2 replies; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

The reason for this patch is the following - some remoteprocs
are quite complicated, and their MMUs can handle several
pagetables. A good example is the OMAP5 GPU, which allocates
several pagetables during its operation. An additional
complication is that not all pagetable physical addresses are
stored in MMU registers: some pagetables may be allocated and
their physical addresses then sent to the GPU using a private
message loop between the GPU kernel driver and the GPU remoteproc.

This patch handles that case. At any moment the kernel can
request translation of such pagetables before sending their
addresses to the GPU remoteproc.
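
For illustration, this is how a guest GPU driver might request the
translation of a freshly allocated pagetable before handing its
address to the GPU remoteproc. A sketch only: the subop and
struct xen_mem_pagetable_addr come from this patch, while the
concrete addresses and the guest-side HYPERVISOR_memory_op() wrapper
are assumptions about the caller's environment:

/* Sketch only: returns the machine address of the translated pagetable,
 * or 0 if the hypervisor refused or failed the translation. */
static u64 translate_gpu_pagetable(u64 pgt_phys)
{
    struct xen_mem_pagetable_addr pgt = {
        .reg   = 0x56000000,  /* SGX MMU base (OMAP_GPU_MMU_MEM_BASE) */
        .paddr = pgt_phys,    /* guest physical address of the pagetable */
    };

    if ( HYPERVISOR_memory_op(XENMEM_translate_remote_pagetable, &pgt) )
        return 0;

    return pgt.maddr;         /* machine address to hand to the GPU */
}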

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/mm.c                          |  8 ++++++++
 xen/arch/arm/remoteproc/remoteproc_iommu.c | 31 ++++++++++++++++++++++++++++++
 xen/include/asm-arm/remoteproc_iommu.h     |  3 +++
 xen/include/public/memory.h                | 14 +++++++++++++-
 4 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0a243b0..f848ebb 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -40,6 +40,10 @@
 #include <xsm/xsm.h>
 #include <xen/pfn.h>
 
+#ifdef HAS_REMOTEPROC
+#include <asm/remoteproc_iommu.h>
+#endif
+
 struct domain *dom_xen, *dom_io, *dom_cow;
 
 /* Static start-of-day pagetables that we use before the allocators
@@ -1117,6 +1121,10 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_get_sharing_shared_pages:
     case XENMEM_get_sharing_freed_pages:
         return 0;
+#ifdef HAS_REMOTEPROC
+    case XENMEM_translate_remote_pagetable:
+        return remoteproc_iommu_translate_pagetable(arg);
+#endif
 
     default:
         return -ENOSYS;
diff --git a/xen/arch/arm/remoteproc/remoteproc_iommu.c b/xen/arch/arm/remoteproc/remoteproc_iommu.c
index c691619..d0a90a7 100644
--- a/xen/arch/arm/remoteproc/remoteproc_iommu.c
+++ b/xen/arch/arm/remoteproc/remoteproc_iommu.c
@@ -23,6 +23,7 @@
 #include <xen/init.h>
 #include <xen/sched.h>
 #include <xen/stdbool.h>
+#include <public/memory.h>
 #include <asm/system.h>
 #include <asm/current.h>
 #include <asm/io.h>
@@ -288,6 +289,36 @@ paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
     return __pa(hyp_pte_table);
 }
 
+long remoteproc_iommu_translate_pagetable(XEN_GUEST_HANDLE_PARAM(void) pgt_addr)
+{
+    struct xen_mem_pagetable_addr pgt;
+    struct mmu_info *mmu = NULL;
+    int res;
+
+    /* check whether the domain is allowed to access the remoteproc MMU */
+    res = xsm_domctl(XSM_HOOK, current->domain, XEN_DOMCTL_access_remote_pagetable);
+    if ( res )
+    {
+        printk(XENLOG_ERR "dom %u is not allowed to access remoteproc MMU res (%d)",
+               current->domain->domain_id, res);
+        return -EPERM;
+    }
+
+    if ( copy_from_guest(&pgt, pgt_addr, 1) )
+        return -EFAULT;
+
+    mmu = mmu_lookup(pgt.reg);
+    if ( !mmu )
+    {
+        pr_mmu(mmu, "can't get mmu for addr 0x%"PRIpaddr"", pgt.reg);
+        return -EINVAL;
+    }
+
+    pgt.maddr = mmu_translate_pagetable(mmu, pgt.paddr);
+
+    return copy_to_guest(pgt_addr, &pgt, 1);
+}
+
 static int mmu_init(struct mmu_info *mmu, u32 data)
 {
     ASSERT(mmu);
diff --git a/xen/include/asm-arm/remoteproc_iommu.h b/xen/include/asm-arm/remoteproc_iommu.h
index 4983505..6aa441a 100644
--- a/xen/include/asm-arm/remoteproc_iommu.h
+++ b/xen/include/asm-arm/remoteproc_iommu.h
@@ -19,6 +19,7 @@
 #define _REMOTEPROC_IOMMU_H_
 
 #include <asm/types.h>
+#include <xen/guest_access.h>
 
 #define MMU_SECTION_SIZE(shift)     (1UL << (shift))
 #define MMU_SECTION_MASK(shift)     (~(MMU_SECTION_SIZE(shift) - 1))
@@ -79,6 +80,8 @@ paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
                                                  struct mmu_pagetable *pgt,
                                                  paddr_t maddr, paddr_t hyp_addr);
 
+long remoteproc_iommu_translate_pagetable(XEN_GUEST_HANDLE_PARAM(void) pgt_addr);
+
 extern struct mmu_info omap_ipu_mmu;
 extern struct mmu_info omap_gpu_mmu;
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 2c57aa0..2ca8429 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -523,7 +523,19 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
-/* Next available subop number is 26 */
+#ifdef HAS_REMOTEPROC
+struct xen_mem_pagetable_addr {
+	paddr_t reg;    /* IN:  device base address */
+	paddr_t paddr;  /* IN:  pagetable physical address */
+	paddr_t maddr;  /* OUT: pagetable machine address */
+};
+typedef struct xen_mem_pagetable_addr xen_mem_pagetable_addr_t;
+DEFINE_XEN_GUEST_HANDLE(xen_mem_pagetable_addr_t);
+
+#define XENMEM_translate_remote_pagetable   26
+#endif
+
+/* Next available subop number is 27 */
 
 #endif /* __XEN_PUBLIC_MEMORY_H__ */
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v03 08/10] arm: add trap for remoteproc mmio accesses
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
                   ` (6 preceding siblings ...)
  2014-09-02 15:46 ` [PATCH v03 07/10] arm: introduce remoteproc_mmu_translate_pagetable mem subops call Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  2014-09-03  9:52   ` Jan Beulich
  2014-09-02 15:46 ` [PATCH v03 09/10] arm: omap: introduce print pagetable function for IPU remoteproc Andrii Tseglytskyi
  2014-09-02 15:46 ` [PATCH v03 10/10] arm: omap: introduce print pagetable function for GPU remoteproc Andrii Tseglytskyi
  9 siblings, 1 reply; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

The following patch connects the previously introduced remoteproc
IOMMU framework with the existing trap framework. Now, when the
kernel tries to access external MMU registers, Xen invokes the
remoteproc IOMMU, which may perform the proper pfn-to-mfn
translation.
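
For clarity, the intended flow on a trapped register write is roughly
the following (illustrative only - this is not the real handler, and
handle_pgt_reg_write() is a placeholder name; mmu_translate_pagetable()
and struct mmu_info come from the earlier patches in this series):

static paddr_t handle_pgt_reg_write(struct mmu_info *mmu, paddr_t pgt_ipa)
{
    /* Build (or refresh) Xen's machine-address copy of the pagetable
     * the guest is trying to program. */
    paddr_t maddr = mmu_translate_pagetable(mmu, pgt_ipa);

    /* The trap handler then writes 'maddr', not the guest IPA, to the
     * real register inside mmu->mem_map. */
    return maddr;
}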

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/common/domain.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index e6b4ae6..bc9a181 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -35,6 +35,7 @@
 #include <asm/debugger.h>
 #include <asm/p2m.h>
 #include <asm/processor.h>
+#include <asm/remoteproc_iommu.h>
 #include <public/sched.h>
 #include <public/sysctl.h>
 #include <public/vcpu.h>
@@ -376,6 +377,12 @@ struct domain *domain_create(
         spin_unlock(&domlist_update_lock);
     }
 
+#ifdef HAS_REMOTEPROC
+    if ( remoteproc_iommu_register_mmio_handlers(d) )
+        printk("Failed to register remoteprocessor mmu mmio handlers for domain %d\n",
+               d->domain_id);
+#endif
+
     return d;
 
  fail:
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v03 09/10] arm: omap: introduce print pagetable function for IPU remoteproc
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
                   ` (7 preceding siblings ...)
  2014-09-02 15:46 ` [PATCH v03 08/10] arm: add trap for remoteproc mmio accesses Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  2014-09-02 15:46 ` [PATCH v03 10/10] arm: omap: introduce print pagetable function for GPU remoteproc Andrii Tseglytskyi
  9 siblings, 0 replies; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

This patch adds the ability to dump all pagetables of the IPU
remoteproc. The only reason for this patch is low-level debugging.

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/remoteproc/omap_iommu.c | 68 ++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/xen/arch/arm/remoteproc/omap_iommu.c b/xen/arch/arm/remoteproc/omap_iommu.c
index f00bfc6..70867e9 100644
--- a/xen/arch/arm/remoteproc/omap_iommu.c
+++ b/xen/arch/arm/remoteproc/omap_iommu.c
@@ -75,6 +75,8 @@ static int mmu_omap_copy_pagetable(struct mmu_info *mmu, struct mmu_pagetable *p
 static paddr_t mmu_ipu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt);
 static paddr_t mmu_gpu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt);
 
+static void mmu_ipu_print_pagetables(struct mmu_info *mmu);
+
 static u32 ipu_trap_offsets[] = {
     MMU_IPU_TTB_OFFSET,
 };
@@ -107,6 +109,7 @@ struct mmu_info omap_ipu_mmu = {
     .num_traps          = ARRAY_SIZE(ipu_trap_offsets),
     .copy_pagetable_pfunc	= mmu_omap_copy_pagetable,
     .translate_pfunc	= mmu_ipu_translate_pagetable,
+    .print_pagetable_pfunc  = mmu_ipu_print_pagetables,
 };
 
 static const struct pagetable_data pagetable_gpu_data = {
@@ -226,6 +229,71 @@ static paddr_t mmu_pte_table_alloc(struct mmu_info *mmu, paddr_t pgd, u32 sect_n
     return __pa(pte) | IPU_PGD_TABLE;
 }
 
+static void mmu_ipu_print_one_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt, u32 index)
+{
+    u32 *pagetable;
+    u32 i, page_counter = 0;
+
+    ASSERT(pgt);
+    ASSERT(pgt->hyp_pagetable);
+    ASSERT(pgt->paddr);
+    ASSERT(pgt->maddr);
+
+    pagetable = pgt->hyp_pagetable;
+
+    pr_mmu(mmu, "pgt[%u][0x%"PRIpaddr"][0x%"PRIpaddr"]", index, pgt->paddr, pgt->maddr);
+    for ( i = 0; i < MMU_PTRS_PER_PGD(mmu); i++ )
+    {
+        paddr_t pgd = pagetable[i];
+        paddr_t *pte_table = NULL;
+
+        if ( !pgd )
+            continue;
+
+        /* "supersection" 16 Mb */
+        /* "section" 1Mb */
+        if ( ipu_pgd_is_super(pgd) || ipu_pgd_is_section(pgd) )
+        {
+            pr_mmu(mmu, "pgt[%u][0x%"PRIpaddr"][0x%"PRIpaddr"] pgd[%u] 0x%"PRIpaddr" (max %lu)",
+                   index, pgt->paddr, pgt->maddr, i, pgd, MMU_PTRS_PER_PGD(mmu));
+
+        /* "table" */
+        }
+        else if ( ipu_pgd_is_table(pgd) )
+        {
+            u32 j;
+
+            pte_table = __va(pgd & MMU_SECTION_MASK(MMU_OMAP_SECOND_LEVEL_SHIFT));
+            if ( !pte_table )
+            {
+                pr_mmu(mmu, "failed to map pagetable");
+                return;
+            }
+
+            for ( j = 0; j < MMU_PTRS_PER_PTE(mmu); j++ )
+            {
+                if ( !pte_table[j] )
+                    continue;
+
+                page_counter++;
+                pr_mmu(mmu, "pgt[%u][0x%"PRIpaddr"][0x%"PRIpaddr"] pgd[%u][0x%"PRIpaddr"]\t pte[%u][0x%"PRIpaddr"] (max %lu)",
+                    index, pgt->paddr, pgt->maddr, i, pgd, j, pte_table[j], MMU_PTRS_PER_PTE(mmu));
+            }
+        }
+    }
+    ASSERT(page_counter == pgt->page_counter);
+}
+
+static void mmu_ipu_print_pagetables(struct mmu_info *mmu)
+{
+    struct mmu_pagetable *pgt;
+    u32 i = 0;
+
+    list_for_each_entry(pgt, &mmu->pagetables_list, link_node) {
+        mmu_ipu_print_one_pagetable(mmu, pgt, i++);
+    }
+}
+
 static paddr_t mmu_ipu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt)
 {
     /* IPU pagetable consists of set of 32 bit pointers */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v03 10/10] arm: omap: introduce print pagetable function for GPU remoteproc
  2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
                   ` (8 preceding siblings ...)
  2014-09-02 15:46 ` [PATCH v03 09/10] arm: omap: introduce print pagetable function for IPU remoteproc Andrii Tseglytskyi
@ 2014-09-02 15:46 ` Andrii Tseglytskyi
  9 siblings, 0 replies; 18+ messages in thread
From: Andrii Tseglytskyi @ 2014-09-02 15:46 UTC (permalink / raw)
  To: Ian Campbell, Stefano Stabellini, Julien Grall, xen-devel

This patch adds the ability to dump all pagetables of the GPU
remoteproc. The only reason for this patch is low-level debugging.

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/remoteproc/omap_iommu.c | 59 ++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/xen/arch/arm/remoteproc/omap_iommu.c b/xen/arch/arm/remoteproc/omap_iommu.c
index 70867e9..cf43250 100644
--- a/xen/arch/arm/remoteproc/omap_iommu.c
+++ b/xen/arch/arm/remoteproc/omap_iommu.c
@@ -76,6 +76,7 @@ static paddr_t mmu_ipu_translate_pagetable(struct mmu_info *mmu, struct mmu_page
 static paddr_t mmu_gpu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt);
 
 static void mmu_ipu_print_pagetables(struct mmu_info *mmu);
+static void mmu_gpu_print_pagetables(struct mmu_info *mmu);
 
 static u32 ipu_trap_offsets[] = {
     MMU_IPU_TTB_OFFSET,
@@ -128,6 +129,7 @@ struct mmu_info omap_gpu_mmu = {
     .num_traps      = ARRAY_SIZE(sgx_trap_offsets),
     .copy_pagetable_pfunc	= mmu_omap_copy_pagetable,
     .translate_pfunc    = mmu_gpu_translate_pagetable,
+    .print_pagetable_pfunc	= mmu_gpu_print_pagetables,
 };
 
 static bool translate_supersections_to_pages = true;
@@ -425,6 +427,63 @@ static paddr_t mmu_ipu_translate_pagetable(struct mmu_info *mmu, struct mmu_page
     return __pa(hyp_pgt);
 }
 
+static void mmu_gpu_print_one_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt, u32 index)
+{
+    u32 *pagetable;
+    u32 i, page_counter = 0;
+
+    ASSERT(pgt);
+    ASSERT(pgt->hyp_pagetable);
+    ASSERT(pgt->paddr);
+    ASSERT(pgt->maddr);
+
+    pagetable = pgt->hyp_pagetable;
+
+    pr_mmu(mmu, "pgt[%u][0x%"PRIpaddr"][0x%"PRIpaddr"]", index, pgt->paddr, pgt->maddr);
+    /* first-level translation */
+    for ( i = 0; i < MMU_PTRS_PER_PGD(mmu); i++ )
+    {
+        paddr_t pgd = pagetable[i];
+        paddr_t *pte_table = NULL;
+        u32 j;
+
+        if ( !pgd )
+            continue;
+
+        pr_mmu(mmu, "pgt[%u][0x%"PRIpaddr"][0x%"PRIpaddr"] pgd[%u] 0x%"PRIpaddr" (max %lu)",
+               index, pgt->paddr, pgt->maddr, i, pgd, MMU_PTRS_PER_PGD(mmu));
+
+        pte_table = __va(pgd & MMU_SECTION_MASK(mmu->pg_data->pte_shift));
+        if ( !pte_table )
+        {
+            pr_mmu(mmu, "failed to map pagetable");
+            return;
+        }
+
+        for ( j = 0; j < MMU_PTRS_PER_PTE(mmu); j++ )
+        {
+            if ( !pte_table[j] )
+                continue;
+
+            page_counter++;
+            pr_mmu(mmu, "pgt[%u][0x%"PRIpaddr"][0x%"PRIpaddr"] pgd[%u]\t pte_table[%u] 0x%"PRIpaddr" (max %lu)",
+                   index, pgt->paddr, pgt->maddr, i, j, pte_table[j], MMU_PTRS_PER_PTE(mmu));
+        }
+    }
+    ASSERT(page_counter == pgt->page_counter);
+}
+
+static void mmu_gpu_print_pagetables(struct mmu_info *mmu)
+{
+    struct mmu_pagetable *pgt;
+    u32 i = 0;
+
+    list_for_each_entry(pgt, &mmu->pagetables_list, link_node)
+    {
+        mmu_gpu_print_one_pagetable(mmu, pgt, i++);
+    }
+}
+
 static paddr_t mmu_gpu_translate_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt)
 {
     /* GPU pagetable consists of set of 32 bit pointers */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v03 01/10] xen: implement guest_physmap_pin_range
  2014-09-02 15:46 ` [PATCH v03 01/10] xen: implement guest_physmap_pin_range Andrii Tseglytskyi
@ 2014-09-03  9:43   ` Jan Beulich
  2014-09-11  1:12   ` Julien Grall
  1 sibling, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2014-09-03  9:43 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Julien Grall, Stefano Stabellini, Ian Campbell, xen-devel

>>> On 02.09.14 at 17:46, <andrii.tseglytskyi@globallogic.com> wrote:
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -514,6 +514,26 @@ void guest_physmap_remove_page(struct domain *d,
>  int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
>                                            unsigned int order);
>  
> +static inline int guest_physmap_pin_range(struct domain *d,
> +                                          paddr_t mfn,

So why are MFNs here being typed as paddr_t rather than
unsigned long?

Jan

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v03 02/10] domctl: introduce access_remote_pagetable call
  2014-09-02 15:46 ` [PATCH v03 02/10] domctl: introduce access_remote_pagetable call Andrii Tseglytskyi
@ 2014-09-03  9:46   ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2014-09-03  9:46 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Julien Grall, Stefano Stabellini, Ian Campbell, xen-devel

>>> On 02.09.14 at 17:46, <andrii.tseglytskyi@globallogic.com> wrote:
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1067,6 +1067,7 @@ struct xen_domctl {
>  #define XEN_DOMCTL_configure_domain              74
>  #define XEN_DOMCTL_dtdev_op                      75
>  #define XEN_DOMCTL_assign_dt_device              76
> +#define XEN_DOMCTL_access_remote_pagetable       77

What's the point of introducing but not handling this?

> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -718,6 +718,9 @@ static int flask_domctl(struct domain *d, int cmd)
>      case XEN_DOMCTL_configure_domain:
>          return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CONFIGURE_DOMAIN);
>  
> +    case XEN_DOMCTL_access_remote_pagetable:
> +        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__ACCESS_REMOTE_PAGETABLE);

This would seem too weak a check, as it's an all-or-nothing one. I
could easily see an entity being permitted access to one GPU, but
not to others or to IPUs.

Also you should Cc the XSM maintainer on XSM changes.

Jan

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v03 07/10] arm: introduce remoteproc_mmu_translate_pagetable mem subops call
  2014-09-02 15:46 ` [PATCH v03 07/10] arm: introduce remoteproc_mmu_translate_pagetable mem subops call Andrii Tseglytskyi
@ 2014-09-03  9:48   ` Jan Beulich
  2014-09-13  0:04   ` Stefano Stabellini
  1 sibling, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2014-09-03  9:48 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Julien Grall, Stefano Stabellini, Ian Campbell, xen-devel

>>> On 02.09.14 at 17:46, <andrii.tseglytskyi@globallogic.com> wrote:
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -523,7 +523,19 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
>  
>  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>  
> -/* Next available subop number is 26 */
> +#ifdef HAS_REMOTEPROC
> +struct xen_mem_pagetable_addr {
> +	paddr_t reg;    /* IN:  device base address */
> +	paddr_t paddr;  /* IN:  pagetable physical address */
> +	paddr_t maddr;  /* OUT: pagetable machine address */

There's no paddr_t in the public interface. Do these really need to
be byte-granular? Otherwise xen_pfn_t would be the right type.
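
For illustration only, the interface could then be expressed in frame
numbers, e.g. (field names below are just a suggestion, not a final
proposal):

struct xen_mem_pagetable_addr {
    uint64_t  reg;      /* IN:  device base address */
    xen_pfn_t pgt_gfn;  /* IN:  pagetable guest frame number */
    xen_pfn_t pgt_mfn;  /* OUT: pagetable machine frame number */
};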

Jan

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v03 08/10] arm: add trap for remoteproc mmio accesses
  2014-09-02 15:46 ` [PATCH v03 08/10] arm: add trap for remoteproc mmio accesses Andrii Tseglytskyi
@ 2014-09-03  9:52   ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2014-09-03  9:52 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Julien Grall, Stefano Stabellini, Ian Campbell, xen-devel

>>> On 02.09.14 at 17:46, <andrii.tseglytskyi@globallogic.com> wrote:
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -35,6 +35,7 @@
>  #include <asm/debugger.h>
>  #include <asm/p2m.h>
>  #include <asm/processor.h>
> +#include <asm/remoteproc_iommu.h>

And this doesn't break the build on x86? I think the entire change
ought to go into arch_domain_create().

> @@ -376,6 +377,12 @@ struct domain *domain_create(
>          spin_unlock(&domlist_update_lock);
>      }
>  
> +#ifdef HAS_REMOTEPROC
> +    if ( remoteproc_iommu_register_mmio_handlers(d) )
> +        printk("Failed to register remoteprocessor mmu mmio handlers for domain %d\n",
> +               d->domain_id);
> +#endif

Such guest-related printk()s ought to be log-leveled. Plus, with
an isolated change like this, it remains entirely unclear whether
just logging a message here is the right kind of error handling.
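
For example (XENLOG_G_ERR being the guest-related error level; this
assumes the return value gets captured in a local variable first):

    rc = remoteproc_iommu_register_mmio_handlers(d);
    if ( rc )
        printk(XENLOG_G_ERR
               "d%d: failed to register remoteproc MMU MMIO handlers (%d)\n",
               d->domain_id, rc);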

Jan

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v03 04/10] arm: introduce remoteprocessor iommu module
  2014-09-02 15:46 ` [PATCH v03 04/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
@ 2014-09-11  0:41   ` Julien Grall
  0 siblings, 0 replies; 18+ messages in thread
From: Julien Grall @ 2014-09-11  0:41 UTC (permalink / raw)
  To: Andrii Tseglytskyi, Ian Campbell, Stefano Stabellini, xen-devel

Hi Andrii,

I'm still concerned about security with this patch. Only one guest
should be able to access a specific remoteproc at a time, and the same
should hold for its MMU.

AFAIU, even with XSM, you are still registering every MMU for every
domain that wants to access a remoteproc.

You have to find a way to say "this domain is allowed to handle this
remoteproc", maybe by keeping a list of MMUs per domain.

On 02/09/14 08:46, Andrii Tseglytskyi wrote:
> diff --git a/xen/arch/arm/remoteproc/Makefile b/xen/arch/arm/remoteproc/Makefile
> new file mode 100644
> index 0000000..0b0ee0e
> --- /dev/null
> +++ b/xen/arch/arm/remoteproc/Makefile
> @@ -0,0 +1 @@

[..]

> +
> +static struct mmu_info *mmu_list[] = {

Maybe "static const"?

AFAIU, you will add callbacks to this structure in a subsequent patch,
right?

If so, I would let the remoteproc driver add the callbacks itself, even
if it's done at compile time.

You can take a look at the platform code.

[..]

> +static struct mmu_pagetable *mmu_alloc_pagetable(struct mmu_info *mmu, paddr_t paddr)
> +{
> +    struct mmu_pagetable *pgt;
> +    u32 pgt_size = MMU_PGD_TABLE_SIZE(mmu);
> +
> +    pgt = xzalloc_bytes(sizeof(struct mmu_pagetable));
> +    if ( !pgt )
> +    {
> +        pr_mmu(mmu, "failed to alloc pagetable structure");

allocate

> +        return NULL;
> +    }
> +
> +    /* allocate pagetable managed by hypervisor */
> +    pgt->hyp_pagetable = xzalloc_bytes(pgt_size);
> +    if ( !pgt->hyp_pagetable )
> +    {
> +        pr_mmu(mmu, "failed to alloc private hyp_pagetable");


allocate

> +        return NULL;
> +    }
> +
> +    /* alocate pagetable for ipa storing */

allocate

IPA

[..]

> +paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
> +                                                struct mmu_pagetable *pgt,
> +                                                paddr_t maddr, paddr_t hyp_addr)
> +{
> +    u32 *pte_table = NULL, *hyp_pte_table = NULL;
> +    u32 i;
> +
> +    /* map second level translation table */
> +    pte_table = map_domain_page(maddr>>PAGE_SHIFT);
> +    if ( !pte_table )
> +    {

map_domain_page can't fail. Therefore the check is not necessary.


> +        pr_mmu(mmu, "failed to map pte table");
> +        return INVALID_PADDR;
> +    }
> +
> +    clean_and_invalidate_xen_dcache_va_range(pte_table, PAGE_SIZE);

I would add a comment explaining why "clean_and_invalidate_..." is 
necessary here.

> +    /* allocate new second level pagetable once */
> +    if ( 0 == hyp_addr )
> +    {
> +        hyp_pte_table = xzalloc_bytes(PAGE_SIZE);
> +        if ( !hyp_pte_table )
> +        {
> +            pr_mmu(mmu, "failed to alloc new pte table");
> +            return INVALID_PADDR;
> +        }
> +    }
> +    else
> +    {
> +        hyp_pte_table = __va(hyp_addr & PAGE_MASK);
> +    }

Braces are not necessary.

[..]

> +    unmap_domain_page(pte_table);
> +
> +    clean_and_invalidate_xen_dcache_va_range(hyp_pte_table, MMU_PTE_TABLE_SIZE(mmu));

We usually put a blank line before the last return.

> +    return __pa(hyp_pte_table);
> +}
> +
> +static int mmu_init(struct mmu_info *mmu, u32 data)
> +{
> +    ASSERT(mmu);
> +    ASSERT(!mmu->mem_map);
> +

Shouldn't you check that the remoteproc MMU will work with the current
board?

As this code is generic, people may want to run the same Xen on multiple
platforms, only some of which use remoteproc. We don't want to let the
other boards use some (if not all) of the MMU drivers.

[..]

> +static int mmu_register_mmio_handler(struct mmu_info *mmu, u32 data)
> +{
> +    struct domain *dom = (struct domain *) data;
> +
> +    register_mmio_handler(dom, &remoteproc_mmio_handler_ops,
> +                          mmu->mem_start,
> +                          mmu->mem_size);
> +
> +    pr_mmu(mmu, "register mmio handler dom %u base 0x%"PRIpaddr", size 0x%"PRIpaddr"",
> +           dom->domain_id, mmu->mem_start, mmu->mem_size);
> +
> +    return 0;
> +}

Thinking about the MMIO handler, I would extend register_mmio_handler
to take a data pointer as a parameter. This pointer would be your MMU.
It would avoid wasting time looking up the MMU and make the code
simpler.
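
Roughly like this (only a sketch; the extra parameter would of course
require updating all existing callers):

void register_mmio_handler(struct domain *d,
                           const struct mmio_handler_ops *ops,
                           paddr_t addr, paddr_t size, void *priv);

static int mmu_register_mmio_handler(struct mmu_info *mmu, u32 data)
{
    struct domain *dom = (struct domain *)data;

    /* The read/write callbacks then get 'mmu' back via 'priv' instead
     * of looking it up from the faulting address. */
    register_mmio_handler(dom, &remoteproc_mmio_handler_ops,
                          mmu->mem_start, mmu->mem_size, mmu);

    return 0;
}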


[..]

> +
> +int remoteproc_iommu_register_mmio_handlers(struct domain *dom)
> +{
> +    int res;
> +
> +    if ( is_idle_domain(dom) )
> +        return -EPERM;
> +
> +    /* check is domain allowed to access remoteproc MMU */
> +    res = xsm_domctl(XSM_HOOK, dom, XEN_DOMCTL_access_remote_pagetable);
> +    if ( res )
> +    {
> +        printk(XENLOG_ERR "dom %u is not allowed to access remoteproc MMU res (%d)",
> +               dom->domain_id, res);
> +        return -EPERM;
> +    }
> +
> +    return mmu_for_each(mmu_register_mmio_handler, (u32)dom);
> +}

I don't see any call to this function in xen/arch/arm/domain.c. Is
that intentional?

To continue my comment from the beginning of this mail, I don't think
we should register every MMU callback for each domain that will use
remoteproc.

Also, what about domain destruction? Shouldn't you free the MMU
pagetables to save space?

> +
> +static int mmu_init_all(void)
> +{
> +    int res;
> +
> +    res = mmu_for_each(mmu_init, 0);
> +    if ( res )
> +    {
> +        printk("%s error during init %d\n", __func__, res);
> +        return res;
> +    }
> +
> +    return 0;
> +}

Again, what will happen if an MMU fails to initialize? I guess bad
things... do_initcalls doesn't check the return value of the initcall
and will silently ignore any error.

I'm not sure whether the common (i.e. ARM & x86) convention is to
return 0 on success. So maybe a panic would be best here.

> +__initcall(mmu_init_all);
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/remoteproc_iommu.h b/xen/include/asm-arm/remoteproc_iommu.h
> new file mode 100644
> index 0000000..6fa78ee
> --- /dev/null
> +++ b/xen/include/asm-arm/remoteproc_iommu.h
> @@ -0,0 +1,82 @@
> +/*
> + * xen/include/xen/remoteproc_iommu.h

xen/include/asm-arm/remoteproc_iommu.h

> + *
> + * Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> + * Copyright (c) 2014 GlobalLogic
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#ifndef _REMOTEPROC_IOMMU_H_
> +#define _REMOTEPROC_IOMMU_H_

We usually include the directory in the header guard name, i.e.

__ARM_REMOTEPROC_IOMMU_H__

And of course fix it at the end of the file too :).

> +struct mmu_info {
> +    const char  *name;
> +    const struct pagetable_data *pg_data;
> +    /* register where phys pointer to pagetable is stored */
> +    u32                 *trap_offsets;
> +    paddr_t             mem_start;
> +    paddr_t             mem_size;
> +    spinlock_t          lock;
> +    struct list_head    pagetables_list;
> +    u32                 num_traps;
> +    void __iomem		*mem_map;

Hard Tab.

> +    paddr_t	(*translate_pfunc)(struct mmu_info *, struct mmu_pagetable *);

Same here.

> +    int (*copy_pagetable_pfunc)(struct mmu_info *mmu, struct mmu_pagetable *pgt);
> +    void (*print_pagetable_pfunc)(struct mmu_info *);
> +};
> +
> +int remoteproc_iommu_register_mmio_handlers(struct domain *dom);
> +
> +paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
> +                                                 struct mmu_pagetable *pgt,
> +                                                 paddr_t maddr, paddr_t hyp_addr);
> +
> +#endif /* _REMOTEPROC_IOMMU_H_ */

We usually add the following lines at the end of the file:

/*
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
  * c-basic-offset: 4
  * indent-tabs-mode: nil
  * End:
  */

Regards,


-- 
Julien Grall

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v03 01/10] xen: implement guest_physmap_pin_range
  2014-09-02 15:46 ` [PATCH v03 01/10] xen: implement guest_physmap_pin_range Andrii Tseglytskyi
  2014-09-03  9:43   ` Jan Beulich
@ 2014-09-11  1:12   ` Julien Grall
  1 sibling, 0 replies; 18+ messages in thread
From: Julien Grall @ 2014-09-11  1:12 UTC (permalink / raw)
  To: Andrii Tseglytskyi, Ian Campbell, Stefano Stabellini, xen-devel

Hi Andrii,

On 02/09/14 08:46, Andrii Tseglytskyi wrote:
>   int guest_physmap_mark_populate_on_demand(struct domain *d,
>                                             unsigned long gfn,
>                                             unsigned int order)
> @@ -478,10 +551,18 @@ static int apply_one_level(struct domain *d,
>       struct p2m_domain *p2m = &d->arch.p2m;
>       lpae_t pte;
>       const lpae_t orig_pte = *entry;
> +    struct page_info *page = NULL;
>       int rc;
>
>       BUG_ON(level > 3);
>
> +    if ( guest_physmap_pinned_range(d, orig_pte.p2m.base, 0) )

This change is wrong: orig_pte.p2m.base may not be valid. I think you
have to do this check only for the REMOVE and INSERT ops.

Also, a few general questions about this patch:
   - What about the destruction of the domain? Shouldn't you remove the
flag?
   - In the REMOVE case, if the page is pinned, the error value will be
ignored (because guest_physmap_remove_page returns void). So the upper
code (see guest_remove_page in common/memory.c) will think the mapping
has effectively been removed and will hand the page back to the memory
allocator... This is because we don't take a reference when the page is
mapped.

Overall, AFAIU your usage in this patch, I don't think we care if the
guest decides to remove the page from the P2M. The most important thing
is to avoid Xen using the page for another guest. I suspect this could
be done by taking a reference on the page.
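
A rough sketch of that idea (error handling, locking, and the matching
unpin on domain destruction are omitted):

static int pin_one_mfn(struct domain *d, unsigned long mfn)
{
    struct page_info *page = mfn_to_page(mfn);

    /* Fails if the page is not (or no longer) owned by 'd'; on success
     * the page cannot go back to the allocator until put_page(). */
    if ( !get_page(page, d) )
        return -EINVAL;

    return 0;
}

static void unpin_one_mfn(unsigned long mfn)
{
    put_page(mfn_to_page(mfn));
}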

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v03 07/10] arm: introduce remoteproc_mmu_translate_pagetable mem subops call
  2014-09-02 15:46 ` [PATCH v03 07/10] arm: introduce remoteproc_mmu_translate_pagetable mem subops call Andrii Tseglytskyi
  2014-09-03  9:48   ` Jan Beulich
@ 2014-09-13  0:04   ` Stefano Stabellini
  1 sibling, 0 replies; 18+ messages in thread
From: Stefano Stabellini @ 2014-09-13  0:04 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Julien Grall, Stefano Stabellini, Ian Campbell, xen-devel

On Tue, 2 Sep 2014, Andrii Tseglytskyi wrote:
> The reason for this patch is the following: some remote processors
> are quite complicated, and their MMUs can handle several pagetables.
> A good example is the OMAP5 GPU, which allocates several pagetables
> during its operation. An additional requirement is that not all
> pagetable physical addresses are stored in MMU registers. Some
> pagetables may be allocated and their physical addresses then sent to
> the GPU through a private message loop between the GPU kernel driver
> and the GPU remoteproc.
> 
> This patch handles that case. At any moment the kernel can request
> translation of such pagetables before sending their addresses to the
> GPU remoteproc.

Of course this approach assumes that the kernel driver can be modified
to be able to call the new hypercall.
If so, what stops you from using this hypercall in all the other cases
too?


> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> ---
>  xen/arch/arm/mm.c                          |  8 ++++++++
>  xen/arch/arm/remoteproc/remoteproc_iommu.c | 31 ++++++++++++++++++++++++++++++
>  xen/include/asm-arm/remoteproc_iommu.h     |  3 +++
>  xen/include/public/memory.h                | 14 +++++++++++++-
>  4 files changed, 55 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0a243b0..f848ebb 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -40,6 +40,10 @@
>  #include <xsm/xsm.h>
>  #include <xen/pfn.h>
>  
> +#ifdef HAS_REMOTEPROC
> +#include <asm/remoteproc_iommu.h>
> +#endif
> +
>  struct domain *dom_xen, *dom_io, *dom_cow;
>  
>  /* Static start-of-day pagetables that we use before the allocators
> @@ -1117,6 +1121,10 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
>      case XENMEM_get_sharing_shared_pages:
>      case XENMEM_get_sharing_freed_pages:
>          return 0;
> +#ifdef HAS_REMOTEPROC
> +    case XENMEM_translate_remote_pagetable:
> +        return remoteproc_iommu_translate_pagetable(arg);
> +#endif
>  
>      default:
>          return -ENOSYS;
> diff --git a/xen/arch/arm/remoteproc/remoteproc_iommu.c b/xen/arch/arm/remoteproc/remoteproc_iommu.c
> index c691619..d0a90a7 100644
> --- a/xen/arch/arm/remoteproc/remoteproc_iommu.c
> +++ b/xen/arch/arm/remoteproc/remoteproc_iommu.c
> @@ -23,6 +23,7 @@
>  #include <xen/init.h>
>  #include <xen/sched.h>
>  #include <xen/stdbool.h>
> +#include <public/memory.h>
>  #include <asm/system.h>
>  #include <asm/current.h>
>  #include <asm/io.h>
> @@ -288,6 +289,36 @@ paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
>      return __pa(hyp_pte_table);
>  }
>  
> +long remoteproc_iommu_translate_pagetable(XEN_GUEST_HANDLE_PARAM(void) pgt_addr)
> +{
> +    struct xen_mem_pagetable_addr pgt;
> +    struct mmu_info *mmu = NULL;
> +    int res;
> +
> +    /* check whether the domain is allowed to access the remoteproc MMU */
> +    res = xsm_domctl(XSM_HOOK, current->domain, XEN_DOMCTL_access_remote_pagetable);
> +    if ( res )
> +    {
> +        printk(XENLOG_ERR "dom %u is not allowed to access remoteproc MMU res (%d)",
> +               current->domain->domain_id, res);
> +        return -EPERM;
> +    }
> +
> +    if ( copy_from_guest(&pgt, pgt_addr, 1) )
> +        return -EFAULT;
> +
> +    mmu = mmu_lookup(pgt.reg);
> +    if ( !mmu )
> +    {
> +        pr_mmu(mmu, "can't get mmu for addr 0x%"PRIpaddr"", pgt.reg);
> +        return -EINVAL;
> +    }
> +
> +    pgt.maddr = mmu_translate_pagetable(mmu, pgt.paddr);
> +
> +    return copy_to_guest(pgt_addr, &pgt, 1);
> +}
> +
>  static int mmu_init(struct mmu_info *mmu, u32 data)
>  {
>      ASSERT(mmu);
> diff --git a/xen/include/asm-arm/remoteproc_iommu.h b/xen/include/asm-arm/remoteproc_iommu.h
> index 4983505..6aa441a 100644
> --- a/xen/include/asm-arm/remoteproc_iommu.h
> +++ b/xen/include/asm-arm/remoteproc_iommu.h
> @@ -19,6 +19,7 @@
>  #define _REMOTEPROC_IOMMU_H_
>  
>  #include <asm/types.h>
> +#include <xen/guest_access.h>
>  
>  #define MMU_SECTION_SIZE(shift)     (1UL << (shift))
>  #define MMU_SECTION_MASK(shift)     (~(MMU_SECTION_SIZE(shift) - 1))
> @@ -79,6 +80,8 @@ paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
>                                                   struct mmu_pagetable *pgt,
>                                                   paddr_t maddr, paddr_t hyp_addr);
>  
> +long remoteproc_iommu_translate_pagetable(XEN_GUEST_HANDLE_PARAM(void) pgt_addr);
> +
>  extern struct mmu_info omap_ipu_mmu;
>  extern struct mmu_info omap_gpu_mmu;
>  
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index 2c57aa0..2ca8429 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -523,7 +523,19 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
>  
>  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>  
> -/* Next available subop number is 26 */
> +#ifdef HAS_REMOTEPROC
> +struct xen_mem_pagetable_addr {
> +	paddr_t reg;    /* IN:  device base address */
> +	paddr_t paddr;  /* IN:  pagetable physical address */
> +	paddr_t maddr;  /* OUT: pagetable machine address */
> +};
> +typedef struct xen_mem_pagetable_addr xen_mem_pagetable_addr_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_mem_pagetable_addr_t);
> +
> +#define XENMEM_translate_remote_pagetable   26
> +#endif
> +
> +/* Next available subop number is 27 */
>  
>  #endif /* __XEN_PUBLIC_MEMORY_H__ */
>  
> -- 
> 1.9.1
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2014-09-13  0:04 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-02 15:46 [PATCH v03 00/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
2014-09-02 15:46 ` [PATCH v03 01/10] xen: implement guest_physmap_pin_range Andrii Tseglytskyi
2014-09-03  9:43   ` Jan Beulich
2014-09-11  1:12   ` Julien Grall
2014-09-02 15:46 ` [PATCH v03 02/10] domctl: introduce access_remote_pagetable call Andrii Tseglytskyi
2014-09-03  9:46   ` Jan Beulich
2014-09-02 15:46 ` [PATCH v03 03/10] xsm: arm: create domU_rpc_t security label Andrii Tseglytskyi
2014-09-02 15:46 ` [PATCH v03 04/10] arm: introduce remoteprocessor iommu module Andrii Tseglytskyi
2014-09-11  0:41   ` Julien Grall
2014-09-02 15:46 ` [PATCH v03 05/10] arm: omap: introduce iommu translation for IPU remoteproc Andrii Tseglytskyi
2014-09-02 15:46 ` [PATCH v03 06/10] arm: omap: introduce iommu translation for GPU remoteproc Andrii Tseglytskyi
2014-09-02 15:46 ` [PATCH v03 07/10] arm: introduce remoteproc_mmu_translate_pagetable mem subops call Andrii Tseglytskyi
2014-09-03  9:48   ` Jan Beulich
2014-09-13  0:04   ` Stefano Stabellini
2014-09-02 15:46 ` [PATCH v03 08/10] arm: add trap for remoteproc mmio accesses Andrii Tseglytskyi
2014-09-03  9:52   ` Jan Beulich
2014-09-02 15:46 ` [PATCH v03 09/10] arm: omap: introduce print pagetable function for IPU remoteproc Andrii Tseglytskyi
2014-09-02 15:46 ` [PATCH v03 10/10] arm: omap: introduce print pagetable function for GPU remoteproc Andrii Tseglytskyi
