From: Julien Grall <julien.grall@arm.com>
To: Shanker Donthineni <shankerd@codeaurora.org>,
	xen-devel <xen-devel@lists.xensource.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: Philip Elcan <pelcan@codeaurora.org>,
	Steve Capper <Steve.Capper@arm.com>,
	Vikram Sethi <vikrams@codeaurora.org>,
	Wei Chen <Wei.Chen@linaro.org>
Subject: Re: [PATCH V3 09/10] xen/arm: io: Use binary search for mmio handler lookup
Date: Tue, 28 Jun 2016 11:13:33 +0100
Message-ID: <57724DCD.2050602@arm.com>
In-Reply-To: <1467059622-14786-9-git-send-email-shankerd@codeaurora.org>

Hi Shanker,

On 27/06/16 21:33, Shanker Donthineni wrote:
> As the number of I/O handlers increases, the overhead associated with
> linear lookup also increases. The system might have a maximum of 144
> (assuming CONFIG_NR_CPUS=128) mmio handlers. In the worst case, it
> would take 144 iterations to find a matching handler. Now it is time
> for us to change from a linear search (complexity O(n)) to a binary
> search (complexity O(log n)) to reduce the mmio handler lookup overhead.

However, you will add contention because the lookup code is using a spinlock.
I am planning to send the following patch as a prerequisite of this series
to switch from a spinlock to a read-write lock:

commit b69e975ce25b2c94f7205b0b8329f351327fbcf7
Author: Julien Grall <julien.grall@arm.com>
Date:   Tue Jun 28 11:04:11 2016 +0100

    xen/arm: io: Protect the handlers with a read-write lock
    
    Currently, accessing the I/O handlers does not require taking a lock
    because new handlers are always added at the end of the array. In a
    follow-up patch, this array will be sorted to optimize the lookup.

    Given that most of the time the I/O handlers will not be modified,
    using a spinlock would add contention when multiple vCPUs are accessing
    the emulated MMIOs. So use a read-write lock to protect the handlers.

    Finally, take the opportunity to re-indent domain_io_init correctly.
    
    Signed-off-by: Julien Grall <julien.grall@arm.com>

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 0156755..5a96836 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -70,23 +70,39 @@ static int handle_write(const struct mmio_handler *handler, struct vcpu *v,
                                handler->priv);
 }
 
-int handle_mmio(mmio_info_t *info)
+static const struct mmio_handler *find_mmio_handler(struct domain *d,
+                                                    paddr_t gpa)
 {
-    struct vcpu *v = current;
-    int i;
-    const struct mmio_handler *handler = NULL;
-    const struct vmmio *vmmio = &v->domain->arch.vmmio;
+    const struct mmio_handler *handler;
+    unsigned int i;
+    struct vmmio *vmmio = &d->arch.vmmio;
+
+    read_lock(&vmmio->lock);
 
     for ( i = 0; i < vmmio->num_entries; i++ )
     {
         handler = &vmmio->handlers[i];
 
-        if ( (info->gpa >= handler->addr) &&
-             (info->gpa < (handler->addr + handler->size)) )
+        if ( (gpa >= handler->addr) &&
+             (gpa < (handler->addr + handler->size)) )
             break;
     }
 
     if ( i == vmmio->num_entries )
+        handler = NULL;
+
+    read_unlock(&vmmio->lock);
+
+    return handler;
+}
+
+int handle_mmio(mmio_info_t *info)
+{
+    struct vcpu *v = current;
+    const struct mmio_handler *handler = NULL;
+
+    handler = find_mmio_handler(v->domain, info->gpa);
+    if ( !handler )
         return 0;
 
     if ( info->dabt.write )
@@ -104,7 +120,7 @@ void register_mmio_handler(struct domain *d,
 
     BUG_ON(vmmio->num_entries >= MAX_IO_HANDLER);
 
-    spin_lock(&vmmio->lock);
+    write_lock(&vmmio->lock);
 
     handler = &vmmio->handlers[vmmio->num_entries];
 
@@ -113,24 +129,17 @@ void register_mmio_handler(struct domain *d,
     handler->size = size;
     handler->priv = priv;
 
-    /*
-     * handle_mmio is not using the lock to avoid contention.
-     * Make sure the other processors see the new handler before
-     * updating the number of entries
-     */
-    dsb(ish);
-
     vmmio->num_entries++;
 
-    spin_unlock(&vmmio->lock);
+    write_unlock(&vmmio->lock);
 }
 
 int domain_io_init(struct domain *d)
 {
-   spin_lock_init(&d->arch.vmmio.lock);
-   d->arch.vmmio.num_entries = 0;
+    rwlock_init(&d->arch.vmmio.lock);
+    d->arch.vmmio.num_entries = 0;
 
-   return 0;
+    return 0;
 }
 
 /*
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index da1cc2e..32f10f2 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -20,6 +20,7 @@
 #define __ASM_ARM_MMIO_H__
 
 #include <xen/lib.h>
+#include <xen/rwlock.h>
 #include <asm/processor.h>
 #include <asm/regs.h>
 
@@ -51,7 +52,7 @@ struct mmio_handler {
 
 struct vmmio {
     int num_entries;
-    spinlock_t lock;
+    rwlock_t lock;
     struct mmio_handler handlers[MAX_IO_HANDLER];
 };
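
For illustration, here is a minimal sketch of what the binary-search
lookup from patch 09 could look like on top of this read-write lock.
This is not the code from the patch itself; it assumes that
register_mmio_handler() keeps the handlers sorted by base address and
that registered ranges never overlap:

/*
 * Hypothetical sketch: binary search over vmmio->handlers, which is
 * assumed to be sorted by handler->addr with non-overlapping ranges.
 */
static const struct mmio_handler *find_mmio_handler(struct domain *d,
                                                    paddr_t gpa)
{
    struct vmmio *vmmio = &d->arch.vmmio;
    const struct mmio_handler *handler = NULL;
    int low = 0, high;

    read_lock(&vmmio->lock);

    high = vmmio->num_entries - 1;

    while ( low <= high )
    {
        int mid = low + (high - low) / 2;
        const struct mmio_handler *cur = &vmmio->handlers[mid];

        if ( gpa < cur->addr )
            high = mid - 1;                 /* gpa is below this range */
        else if ( gpa >= (cur->addr + cur->size) )
            low = mid + 1;                  /* gpa is above this range */
        else
        {
            /* cur->addr <= gpa < cur->addr + cur->size */
            handler = cur;
            break;
        }
    }

    read_unlock(&vmmio->lock);

    return handler;
}

With 144 handlers this needs at most 8 comparisons instead of up to 144,
and readers can still run concurrently under the read lock.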

-- 
Julien Grall
