xen-devel.lists.xenproject.org archive mirror
* [PATCH V4 08/11] arm/io: Use separate memory allocation for mmio handlers
@ 2016-07-14 16:17 Shanker Donthineni
  2016-07-14 16:18 ` [PATCH V4 10/11] xen/arm: io: Use binary search for mmio handler lookup Shanker Donthineni
  2016-07-14 16:18 ` [PATCH V4 11/11] arm/vgic: Change fixed number of mmio handlers to variable number Shanker Donthineni
  0 siblings, 2 replies; 7+ messages in thread
From: Shanker Donthineni @ 2016-07-14 16:17 UTC (permalink / raw)
  To: xen-devel, Julien Grall, Stefano Stabellini
  Cc: Philip Elcan, Shanker Donthineni, Vikram Sethi

The number of mmio handlers is limited to the compile-time macro
MAX_IO_HANDLER, which is 16. This is not sufficient to support per-CPU
distributor regions. Either it needs to be increased to a larger
number, at least CONFIG_NR_CPUS+16, or memory for the mmio handlers
needs to be allocated dynamically during domain build.

This patch uses the dynamic allocation strategy instead of static
allocation to reduce the memory footprint of 'struct domain'.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
Acked-by: Julien Grall <julien.grall@arm.com>
---
Changes since v1:
  Moved registration of vgic_v3/v2 functionality to a new domain_vgic_register()

 xen/arch/arm/domain.c      |  6 ++++--
 xen/arch/arm/io.c          | 13 +++++++++++--
 xen/include/asm-arm/mmio.h |  7 +++++--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 1365b4a..4010ff2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -527,7 +527,7 @@ void vcpu_destroy(struct vcpu *v)
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config)
 {
-    int rc;
+    int rc, count;
 
     d->arch.relmem = RELMEM_not_started;
 
@@ -550,7 +550,8 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     share_xen_page_with_guest(
         virt_to_page(d->shared_info), d, XENSHARE_writable);
 
-    if ( (rc = domain_io_init(d)) != 0 )
+    count = MAX_IO_HANDLER;
+    if ( (rc = domain_io_init(d, count)) != 0 )
         goto fail;
 
     if ( (rc = p2m_alloc_table(d)) != 0 )
@@ -644,6 +645,7 @@ void arch_domain_destroy(struct domain *d)
     free_xenheap_pages(d->arch.efi_acpi_table,
                        get_order_from_bytes(d->arch.efi_acpi_len));
 #endif
+    domain_io_free(d);
 }
 
 void arch_domain_shutdown(struct domain *d)
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 5a96836..40330f0 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -118,7 +118,7 @@ void register_mmio_handler(struct domain *d,
     struct vmmio *vmmio = &d->arch.vmmio;
     struct mmio_handler *handler;
 
-    BUG_ON(vmmio->num_entries >= MAX_IO_HANDLER);
+    BUG_ON(vmmio->num_entries >= vmmio->max_num_entries);
 
     write_lock(&vmmio->lock);
 
@@ -134,14 +134,23 @@ void register_mmio_handler(struct domain *d,
     write_unlock(&vmmio->lock);
 }
 
-int domain_io_init(struct domain *d)
+int domain_io_init(struct domain *d, int max_count)
 {
     rwlock_init(&d->arch.vmmio.lock);
     d->arch.vmmio.num_entries = 0;
+    d->arch.vmmio.max_num_entries = max_count;
+    d->arch.vmmio.handlers = xzalloc_array(struct mmio_handler, max_count);
+    if ( !d->arch.vmmio.handlers )
+        return -ENOMEM;
 
     return 0;
 }
 
+void domain_io_free(struct domain *d)
+{
+    xfree(d->arch.vmmio.handlers);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index 32f10f2..c620eed 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -52,15 +52,18 @@ struct mmio_handler {
 
 struct vmmio {
     int num_entries;
+    int max_num_entries;
     rwlock_t lock;
-    struct mmio_handler handlers[MAX_IO_HANDLER];
+    struct mmio_handler *handlers;
 };
 
 extern int handle_mmio(mmio_info_t *info);
 void register_mmio_handler(struct domain *d,
                            const struct mmio_handler_ops *ops,
                            paddr_t addr, paddr_t size, void *priv);
-int domain_io_init(struct domain *d);
+int domain_io_init(struct domain *d, int max_count);
+void domain_io_free(struct domain *d);
+
 
 #endif  /* __ASM_ARM_MMIO_H__ */
 
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH V4 10/11] xen/arm: io: Use binary search for mmio handler lookup
  2016-07-14 16:17 [PATCH V4 08/11] arm/io: Use separate memory allocation for mmio handlers Shanker Donthineni
@ 2016-07-14 16:18 ` Shanker Donthineni
  2016-07-14 16:46   ` Julien Grall
  2016-07-14 16:18 ` [PATCH V4 11/11] arm/vgic: Change fixed number of mmio handlers to variable number Shanker Donthineni
  1 sibling, 1 reply; 7+ messages in thread
From: Shanker Donthineni @ 2016-07-14 16:18 UTC (permalink / raw)
  To: xen-devel, Julien Grall, Stefano Stabellini
  Cc: Philip Elcan, Shanker Donthineni, Vikram Sethi

As the number of I/O handlers increases, so does the overhead of a
linear lookup. The system might have a maximum of 144 (assuming
CONFIG_NR_CPUS=128) mmio handlers. In the worst case, finding a
matching handler would require 144 iterations. It is time to change
from a linear search (complexity O(n)) to a binary search (complexity
O(log n)) to reduce mmio handler lookup overhead.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
---
Changes since v3:
  Moved the function bsearch() to the common file xen/common/bsearch.c.

Changes since v2:
  Converted the mmio lookup code to a critical section.
  Copied the function bsearch() from the Linux kernel.

 xen/arch/arm/io.c | 42 +++++++++++++++++++++++++-----------------
 1 file changed, 25 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 40330f0..0471ba8 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -20,6 +20,8 @@
 #include <xen/lib.h>
 #include <xen/spinlock.h>
 #include <xen/sched.h>
+#include <xen/sort.h>
+#include <xen/bsearch.h>
 #include <asm/current.h>
 #include <asm/mmio.h>
 
@@ -70,27 +72,29 @@ static int handle_write(const struct mmio_handler *handler, struct vcpu *v,
                                handler->priv);
 }
 
-static const struct mmio_handler *find_mmio_handler(struct domain *d,
-                                                    paddr_t gpa)
+static int cmp_mmio_handler(const void *key, const void *elem)
 {
-    const struct mmio_handler *handler;
-    unsigned int i;
-    struct vmmio *vmmio = &d->arch.vmmio;
+    const struct mmio_handler *handler0 = key;
+    const struct mmio_handler *handler1 = elem;
 
-    read_lock(&vmmio->lock);
+    if ( handler0->addr < handler1->addr )
+        return -1;
 
-    for ( i = 0; i < vmmio->num_entries; i++ )
-    {
-        handler = &vmmio->handlers[i];
+    if ( handler0->addr > (handler1->addr + handler1->size) )
+        return 1;
 
-        if ( (gpa >= handler->addr) &&
-             (gpa < (handler->addr + handler->size)) )
-            break;
-    }
+    return 0;
+}
 
-    if ( i == vmmio->num_entries )
-        handler = NULL;
+static const struct mmio_handler *find_mmio_handler(struct vcpu *v, paddr_t gpa)
+{
+    struct vmmio *vmmio = &v->domain->arch.vmmio;
+    struct mmio_handler key = {.addr = gpa};
+    const struct mmio_handler *handler;
 
+    read_lock(&vmmio->lock);
+    handler = bsearch(&key, vmmio->handlers, vmmio->num_entries,
+                      sizeof(*handler), cmp_mmio_handler);
     read_unlock(&vmmio->lock);
 
     return handler;
@@ -99,9 +103,9 @@ static const struct mmio_handler *find_mmio_handler(struct domain *d,
 int handle_mmio(mmio_info_t *info)
 {
     struct vcpu *v = current;
-    const struct mmio_handler *handler = NULL;
+    const struct mmio_handler *handler;
 
-    handler = find_mmio_handler(v->domain, info->gpa);
+    handler = find_mmio_handler(v, info->gpa);
     if ( !handler )
         return 0;
 
@@ -131,6 +135,10 @@ void register_mmio_handler(struct domain *d,
 
     vmmio->num_entries++;
 
+    /* Sort mmio handlers in ascending order based on base address */
+    sort(vmmio->handlers, vmmio->num_entries, sizeof(struct mmio_handler),
+        cmp_mmio_handler, NULL);
+
     write_unlock(&vmmio->lock);
 }
 




* [PATCH V4 11/11] arm/vgic: Change fixed number of mmio handlers to variable number
  2016-07-14 16:17 [PATCH V4 08/11] arm/io: Use separate memory allocation for mmio handlers Shanker Donthineni
  2016-07-14 16:18 ` [PATCH V4 10/11] xen/arm: io: Use binary search for mmio handler lookup Shanker Donthineni
@ 2016-07-14 16:18 ` Shanker Donthineni
  1 sibling, 0 replies; 7+ messages in thread
From: Shanker Donthineni @ 2016-07-14 16:18 UTC (permalink / raw)
  To: xen-devel, Julien Grall, Stefano Stabellini
  Cc: Philip Elcan, Shanker Donthineni, Vikram Sethi

Compute the number of mmio handlers required by the vGICv3 and vGICv2
emulation drivers in vgic_v3_init()/vgic_v2_init(). Add the fixed
number MAX_IO_HANDLER to this variable count and pass the total to
domain_io_init() to allocate enough memory.

New code path:
 domain_vgic_register(&count)
   domain_io_init(count + MAX_IO_HANDLER)
     domain_vgic_init()

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
---
Changes since v3:
  Removed the variable 'mmio_count' from struct 'arch_domain'; it is now passed through a function argument.

 xen/arch/arm/domain.c      | 12 +++++++-----
 xen/arch/arm/vgic-v2.c     |  3 ++-
 xen/arch/arm/vgic-v3.c     |  5 ++++-
 xen/arch/arm/vgic.c        | 10 +++-------
 xen/include/asm-arm/vgic.h |  5 +++--
 5 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 4010ff2..ddecd45 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -527,7 +527,7 @@ void vcpu_destroy(struct vcpu *v)
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config)
 {
-    int rc, count;
+    int rc, count = 0;
 
     d->arch.relmem = RELMEM_not_started;
 
@@ -550,10 +550,6 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     share_xen_page_with_guest(
         virt_to_page(d->shared_info), d, XENSHARE_writable);
 
-    count = MAX_IO_HANDLER;
-    if ( (rc = domain_io_init(d, count)) != 0 )
-        goto fail;
-
     if ( (rc = p2m_alloc_table(d)) != 0 )
         goto fail;
 
@@ -590,6 +586,12 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
         goto fail;
     }
 
+    if ( (rc = domain_vgic_register(d, &count)) != 0 )
+        goto fail;
+
+    if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )
+        goto fail;
+
     if ( (rc = domain_vgic_init(d, config->nr_spis)) != 0 )
         goto fail;
 
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index f5778e6..928f9af 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -711,7 +711,7 @@ static const struct vgic_ops vgic_v2_ops = {
     .max_vcpus = 8,
 };
 
-int vgic_v2_init(struct domain *d)
+int vgic_v2_init(struct domain *d, int *mmio_count)
 {
     if ( !vgic_v2_hw.enabled )
     {
@@ -721,6 +721,7 @@ int vgic_v2_init(struct domain *d)
         return -ENODEV;
     }
 
+    *mmio_count = 1; /* Only GICD region */
     register_vgic_ops(d, &vgic_v2_ops);
 
     return 0;
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index be9a9a3..f926fe6 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1499,7 +1499,7 @@ static const struct vgic_ops v3_ops = {
     .max_vcpus = 4096,
 };
 
-int vgic_v3_init(struct domain *d)
+int vgic_v3_init(struct domain *d, int *mmio_count)
 {
     if ( !vgic_v3_hw.enabled )
     {
@@ -1509,6 +1509,9 @@ int vgic_v3_init(struct domain *d)
         return -ENODEV;
     }
 
+    /* GICD region + number of Redistributors */
+    *mmio_count = vgic_v3_rdist_count(d) + 1;
+
     register_vgic_ops(d, &v3_ops);
 
     return 0;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index f5e89af..de8a94d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -88,18 +88,18 @@ static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
         rank->vcpu[i] = vcpu;
 }
 
-static int domain_vgic_register(struct domain *d)
+int domain_vgic_register(struct domain *d, int *mmio_count)
 {
     switch ( d->arch.vgic.version )
     {
 #ifdef CONFIG_HAS_GICV3
     case GIC_V3:
-        if ( vgic_v3_init(d) )
+        if ( vgic_v3_init(d, mmio_count) )
            return -ENODEV;
         break;
 #endif
     case GIC_V2:
-        if ( vgic_v2_init(d) )
+        if ( vgic_v2_init(d, mmio_count) )
             return -ENODEV;
         break;
     default:
@@ -124,10 +124,6 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
 
     d->arch.vgic.nr_spis = nr_spis;
 
-    ret = domain_vgic_register(d);
-    if ( ret < 0 )
-        return ret;
-
     spin_lock_init(&d->arch.vgic.lock);
 
     d->arch.vgic.shared_irqs =
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index c3cc4f6..300f461 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -304,9 +304,10 @@ extern int vgic_emulate(struct cpu_user_regs *regs, union hsr hsr);
 extern void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n);
 extern void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n);
 extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
-int vgic_v2_init(struct domain *d);
-int vgic_v3_init(struct domain *d);
+int vgic_v2_init(struct domain *d, int *mmio_count);
+int vgic_v3_init(struct domain *d, int *mmio_count);
 
+extern int domain_vgic_register(struct domain *d, int *mmio_count);
 extern int vcpu_vgic_free(struct vcpu *v);
 extern int vgic_to_sgi(struct vcpu *v, register_t sgir,
                        enum gic_sgi_mode irqmode, int virq,




* Re: [PATCH V4 10/11] xen/arm: io: Use binary search for mmio handler lookup
  2016-07-14 16:18 ` [PATCH V4 10/11] xen/arm: io: Use binary search for mmio handler lookup Shanker Donthineni
@ 2016-07-14 16:46   ` Julien Grall
  2016-07-15  2:35     ` Shanker Donthineni
  0 siblings, 1 reply; 7+ messages in thread
From: Julien Grall @ 2016-07-14 16:46 UTC (permalink / raw)
  To: Shanker Donthineni, xen-devel, Stefano Stabellini
  Cc: Philip Elcan, Vikram Sethi

Hi Shanker,

On 14/07/16 17:18, Shanker Donthineni wrote:
> As the number of I/O handlers increases, so does the overhead of a
> linear lookup. The system might have a maximum of 144 (assuming
> CONFIG_NR_CPUS=128) mmio handlers. In the worst case, finding a
> matching handler would require 144 iterations. It is time to change
> from a linear search (complexity O(n)) to a binary search (complexity
> O(log n)) to reduce mmio handler lookup overhead.
>
> Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
> ---
> Changes since v3:
>    Moved the function bsearch() to common file xen/common/bsearch.c.
>
> Changes since v2:
>    Converted mmio lookup code to a critical section.
>    Copied the function bsearch() from the Linux kernel.
>
>   xen/arch/arm/io.c | 42 +++++++++++++++++++++++++-----------------
>   1 file changed, 25 insertions(+), 17 deletions(-)
>
> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
> index 40330f0..0471ba8 100644
> --- a/xen/arch/arm/io.c
> +++ b/xen/arch/arm/io.c
> @@ -20,6 +20,8 @@
>   #include <xen/lib.h>
>   #include <xen/spinlock.h>
>   #include <xen/sched.h>
> +#include <xen/sort.h>
> +#include <xen/bsearch.h>
>   #include <asm/current.h>
>   #include <asm/mmio.h>
>
> @@ -70,27 +72,29 @@ static int handle_write(const struct mmio_handler *handler, struct vcpu *v,
>                                  handler->priv);
>   }
>
> -static const struct mmio_handler *find_mmio_handler(struct domain *d,
> -                                                    paddr_t gpa)
> +static int cmp_mmio_handler(const void *key, const void *elem)

Would it be worth mentioning in a comment that we don't support overlapping regions?

>   {
> -    const struct mmio_handler *handler;
> -    unsigned int i;
> -    struct vmmio *vmmio = &d->arch.vmmio;
> +    const struct mmio_handler *handler0 = key;
> +    const struct mmio_handler *handler1 = elem;
>
> -    read_lock(&vmmio->lock);
> +    if ( handler0->addr < handler1->addr )
> +        return -1;
>
> -    for ( i = 0; i < vmmio->num_entries; i++ )
> -    {
> -        handler = &vmmio->handlers[i];
> +    if ( handler0->addr > (handler1->addr + handler1->size) )
> +        return 1;
>
> -        if ( (gpa >= handler->addr) &&
> -             (gpa < (handler->addr + handler->size)) )
> -            break;
> -    }
> +    return 0;
> +}
>
> -    if ( i == vmmio->num_entries )
> -        handler = NULL;
> +static const struct mmio_handler *find_mmio_handler(struct vcpu *v, paddr_t gpa)

Why have you changed the prototype of find_mmio_handler?

> +{
> +    struct vmmio *vmmio = &v->domain->arch.vmmio;
> +    struct mmio_handler key = {.addr = gpa};

I know it is not currently the case, but should not we take into account 
the size of the access?

> +    const struct mmio_handler *handler;
>
> +    read_lock(&vmmio->lock);
> +    handler = bsearch(&key, vmmio->handlers, vmmio->num_entries,
> +                      sizeof(*handler), cmp_mmio_handler);
>       read_unlock(&vmmio->lock);
>
>       return handler;
> @@ -99,9 +103,9 @@ static const struct mmio_handler *find_mmio_handler(struct domain *d,
>   int handle_mmio(mmio_info_t *info)
>   {
>       struct vcpu *v = current;
> -    const struct mmio_handler *handler = NULL;
> +    const struct mmio_handler *handler;

Why this change?

>
> -    handler = find_mmio_handler(v->domain, info->gpa);
> +    handler = find_mmio_handler(v, info->gpa);

ditto.

>       if ( !handler )
>           return 0;
>
> @@ -131,6 +135,10 @@ void register_mmio_handler(struct domain *d,
>
>       vmmio->num_entries++;
>
> +    /* Sort mmio handlers in ascending order based on base address */
> +    sort(vmmio->handlers, vmmio->num_entries, sizeof(struct mmio_handler),
> +        cmp_mmio_handler, NULL);

The indentation looks wrong here.

> +
>       write_unlock(&vmmio->lock);
>   }
>
>

-- 
Julien Grall



* Re: [PATCH V4 10/11] xen/arm: io: Use binary search for mmio handler lookup
  2016-07-14 16:46   ` Julien Grall
@ 2016-07-15  2:35     ` Shanker Donthineni
  2016-07-15  9:52       ` Julien Grall
  2016-07-15 13:18       ` Shanker Donthineni
  0 siblings, 2 replies; 7+ messages in thread
From: Shanker Donthineni @ 2016-07-15  2:35 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Stefano Stabellini; +Cc: Philip Elcan, Vikram Sethi

Hi Julien,

On 07/14/2016 11:46 AM, Julien Grall wrote:
> Hi Shanker,
>
> On 14/07/16 17:18, Shanker Donthineni wrote:
>> As the number of I/O handlers increases, so does the overhead of a
>> linear lookup. The system might have a maximum of 144 (assuming
>> CONFIG_NR_CPUS=128) mmio handlers. In the worst case, finding a
>> matching handler would require 144 iterations. It is time to change
>> from a linear search (complexity O(n)) to a binary search (complexity
>> O(log n)) to reduce mmio handler lookup overhead.
>>
>> Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
>> ---
>> Changes since v3:
>>    Moved the function bsearch() to common file xen/common/bsearch.c.
>>
>> Changes since v2:
>>    Converted mmio lookup code to a critical section.
>>    Copied the function bsearch() from the Linux kernel.
>>
>>   xen/arch/arm/io.c | 42 +++++++++++++++++++++++++-----------------
>>   1 file changed, 25 insertions(+), 17 deletions(-)
>>
>> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
>> index 40330f0..0471ba8 100644
>> --- a/xen/arch/arm/io.c
>> +++ b/xen/arch/arm/io.c
>> @@ -20,6 +20,8 @@
>>   #include <xen/lib.h>
>>   #include <xen/spinlock.h>
>>   #include <xen/sched.h>
>> +#include <xen/sort.h>
>> +#include <xen/bsearch.h>
>>   #include <asm/current.h>
>>   #include <asm/mmio.h>
>>
>> @@ -70,27 +72,29 @@ static int handle_write(const struct mmio_handler 
>> *handler, struct vcpu *v,
>>                                  handler->priv);
>>   }
>>
>> -static const struct mmio_handler *find_mmio_handler(struct domain *d,
>> -                                                    paddr_t gpa)
>> +static int cmp_mmio_handler(const void *key, const void *elem)
>
> Would it be worth mentioning in a comment that we don't support
> overlapping regions?
>

Sure, I'll do.

>>   {
>> -    const struct mmio_handler *handler;
>> -    unsigned int i;
>> -    struct vmmio *vmmio = &d->arch.vmmio;
>> +    const struct mmio_handler *handler0 = key;
>> +    const struct mmio_handler *handler1 = elem;
>>
>> -    read_lock(&vmmio->lock);
>> +    if ( handler0->addr < handler1->addr )
>> +        return -1;
>>
>> -    for ( i = 0; i < vmmio->num_entries; i++ )
>> -    {
>> -        handler = &vmmio->handlers[i];
>> +    if ( handler0->addr > (handler1->addr + handler1->size) )
>> +        return 1;
>>
>> -        if ( (gpa >= handler->addr) &&
>> -             (gpa < (handler->addr + handler->size)) )
>> -            break;
>> -    }
>> +    return 0;
>> +}
>>
>> -    if ( i == vmmio->num_entries )
>> -        handler = NULL;
>> +static const struct mmio_handler *find_mmio_handler(struct vcpu *v, 
>> paddr_t gpa)
>
> Why have you changed the prototype of find_mmio_handler?
>

As such there is no reason for this change, I'll revert this change in 
next patchset.
>> +{
>> +    struct vmmio *vmmio = &v->domain->arch.vmmio;
>> +    struct mmio_handler key = {.addr = gpa};
>
> I know it is not currently the case, but should not we take into 
> account the size of the access?
>

I agree with you; we definitely need to consider the size of the access
for traps/emulation drivers, similar to what the Linux KVM code does
currently. Do you know which emulation drivers use non-aligned accesses?


>> +    const struct mmio_handler *handler;
>>
>> +    read_lock(&vmmio->lock);
>> +    handler = bsearch(&key, vmmio->handlers, vmmio->num_entries,
>> +                      sizeof(*handler), cmp_mmio_handler);
>>       read_unlock(&vmmio->lock);
>>
>>       return handler;
>> @@ -99,9 +103,9 @@ static const struct mmio_handler 
>> *find_mmio_handler(struct domain *d,
>>   int handle_mmio(mmio_info_t *info)
>>   {
>>       struct vcpu *v = current;
>> -    const struct mmio_handler *handler = NULL;
>> +    const struct mmio_handler *handler;
>
> Why this change?
>

No need to initialize this local variable because the following line 
always initializes it. I'll revert this change anyway to avoid confusion.

>>
>> -    handler = find_mmio_handler(v->domain, info->gpa);
>> +    handler = find_mmio_handler(v, info->gpa);
>
> ditto.
>
>>       if ( !handler )
>>           return 0;
>>
>> @@ -131,6 +135,10 @@ void register_mmio_handler(struct domain *d,
>>
>>       vmmio->num_entries++;
>>
>> +    /* Sort mmio handlers in ascending order based on base address */
>> +    sort(vmmio->handlers, vmmio->num_entries, sizeof(struct 
>> mmio_handler),
>> +        cmp_mmio_handler, NULL);
>
> The indentation looks wrong here.
>

I don't quite understand what you mean. It has 8 spaces; do I need more
than 8 spaces, or something else?
>> +
>>       write_unlock(&vmmio->lock);
>>   }
>>
>>
>

-- 
Shanker Donthineni
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.




* Re: [PATCH V4 10/11] xen/arm: io: Use binary search for mmio handler lookup
  2016-07-15  2:35     ` Shanker Donthineni
@ 2016-07-15  9:52       ` Julien Grall
  2016-07-15 13:18       ` Shanker Donthineni
  1 sibling, 0 replies; 7+ messages in thread
From: Julien Grall @ 2016-07-15  9:52 UTC (permalink / raw)
  To: shankerd, xen-devel, Stefano Stabellini; +Cc: Philip Elcan, Vikram Sethi

Hello Shanker,

On 15/07/16 03:35, Shanker Donthineni wrote:
> On 07/14/2016 11:46 AM, Julien Grall wrote:

[...]

>>> +{
>>> +    struct vmmio *vmmio = &v->domain->arch.vmmio;
>>> +    struct mmio_handler key = {.addr = gpa};
>>
>> I know it is not currently the case, but should not we take into
>> account the size of the access?
>>
>
> I agree with you; we definitely need to consider the size of the access
> for traps/emulation drivers, similar to what the Linux KVM code does
> currently. Do you know which emulation drivers use non-aligned accesses?

Sorry, I don't understand your question.

>
>
>>> +    const struct mmio_handler *handler;
>>>
>>> +    read_lock(&vmmio->lock);
>>> +    handler = bsearch(&key, vmmio->handlers, vmmio->num_entries,
>>> +                      sizeof(*handler), cmp_mmio_handler);
>>>       read_unlock(&vmmio->lock);
>>>
>>>       return handler;

[...]

>>> @@ -131,6 +135,10 @@ void register_mmio_handler(struct domain *d,
>>>
>>>       vmmio->num_entries++;
>>>
>>> +    /* Sort mmio handlers in ascending order based on base address */
>>> +    sort(vmmio->handlers, vmmio->num_entries, sizeof(struct
>>> mmio_handler),
>>> +        cmp_mmio_handler, NULL);
>>
>> The indentation looks wrong here.
>>
>
> I don't quite understand what you mean. It has 8 spaces, do I need more
> than 8 spaces or something else?

If the list of arguments spans multiple lines, the continuation lines
should be aligned with the first argument. I.e:

      sort(foo, bar,
           fish);

Your editor should already do it.

Regards,

-- 
Julien Grall



* Re: [PATCH V4 10/11] xen/arm: io: Use binary search for mmio handler lookup
  2016-07-15  2:35     ` Shanker Donthineni
  2016-07-15  9:52       ` Julien Grall
@ 2016-07-15 13:18       ` Shanker Donthineni
  1 sibling, 0 replies; 7+ messages in thread
From: Shanker Donthineni @ 2016-07-15 13:18 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Stefano Stabellini; +Cc: Philip Elcan, Vikram Sethi

Hi Julien,

Are you okay with inserting the two validation checks below before
registering an mmio handler?

void register_mmio_handler(struct domain *d,
                            const struct mmio_handler_ops *ops,
                            paddr_t addr, paddr_t size, void *priv)
{
     struct vmmio *vmmio = &d->arch.vmmio;
     struct mmio_handler *handler;

     /* Don't allow overlapping regions */
     BUG_ON(find_mmio_handler(d, addr));
     BUG_ON(find_mmio_handler(d, addr + size - 1));

     BUG_ON(vmmio->num_entries >= vmmio->max_num_entries);

-- 
Shanker Donthineni




