From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Cc: proskurin@sec.in.tum.de, Julien Grall <julien.grall@arm.com>,
	sstabellini@kernel.org, steve.capper@arm.com,
	wei.chen@linaro.org
Subject: [for-4.8][PATCH v2 14/23] xen/arm: p2m: Re-implement p2m_cache_flush using p2m_get_entry
Date: Thu, 15 Sep 2016 12:28:30 +0100	[thread overview]
Message-ID: <1473938919-31976-15-git-send-email-julien.grall@arm.com> (raw)
In-Reply-To: <1473938919-31976-1-git-send-email-julien.grall@arm.com>

The function p2m_cache_flush can be re-implemented using the generic
function p2m_get_entry, by iterating over the range and using the
mapping order returned by p2m_get_entry.
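
For orientation, a condensed sketch of the resulting loop is given
below; the authoritative version is in the hunk touching
p2m_cache_flush further down:

    p2m_read_lock(p2m);
    for ( ; gfn_x(start) < gfn_x(end); start = next_gfn )
    {
        /* Look up the entry and the order of the mapping covering it. */
        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);

        next_gfn = gfn_next_boundary(start, order);

        /* Skip holes and non-RAM pages. */
        if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(t) )
            continue;

        /* Flush every page covered by the current mapping. */
        for ( ; gfn_x(start) < gfn_x(next_gfn);
              start = gfn_add(start, 1), mfn = mfn_add(mfn, 1) )
            flush_page_to_ram(mfn_x(mfn));
    }
    p2m_read_unlock(p2m);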

As in the current implementation, no preemption is implemented,
although a comment in the current code claims otherwise. As the
function is called by a DOMCTL with a region of at most 1GB, I think
preemption can be left unimplemented for now.

Finally, drop the operation CACHEFLUSH in apply_one_level as nobody is
using it anymore. Note that the function could have been dropped in one
go at the end; however, I find it easier to drop the operations one by
one, avoiding a big deletion in the patch that converts the last
operation.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    The loop pattern will be very similar for the relinquish function.
    It might be possible to extract it into a separate function.
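
    A purely illustrative sketch of what such a shared helper could
    look like (p2m_iterate_range and p2m_range_fn_t are made-up names,
    not part of this series):

        /*
         * Hypothetical sketch only -- p2m_iterate_range and
         * p2m_range_fn_t do not exist in this series.
         */
        typedef int (*p2m_range_fn_t)(struct p2m_domain *p2m, gfn_t gfn,
                                      mfn_t mfn, p2m_type_t t,
                                      unsigned int order);

        static int p2m_iterate_range(struct p2m_domain *p2m, gfn_t start,
                                     gfn_t end, p2m_range_fn_t fn)
        {
            while ( gfn_x(start) < gfn_x(end) )
            {
                p2m_type_t t;
                unsigned int order;
                mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
                int rc = fn(p2m, start, mfn, t, order);

                if ( rc )
                    return rc;

                /* Move to the start of the next mapping. */
                start = gfn_next_boundary(start, order);
            }

            return 0;
        }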

    Changes in v2:
        - Introduce and use gfn_next_boundary
        - Flush all the mappings in a superpage rather than page by page.
        - Update doc
---
 xen/arch/arm/p2m.c | 83 ++++++++++++++++++++++++++++++++----------------------
 1 file changed, 50 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ddee258..fa58f1a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -62,6 +62,22 @@ static inline void p2m_write_lock(struct p2m_domain *p2m)
     write_lock(&p2m->lock);
 }
 
+/*
+ * Return the start of the next mapping based on the order of the
+ * current one.
+ */
+static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
+{
+    /*
+     * The order corresponds to the order of the mapping (or invalid
+     * range) in the page table. So we need to align the GFN before
+     * incrementing.
+     */
+    gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
+
+    return gfn_add(gfn, 1UL << order);
+}
+
 static void p2m_flush_tlb(struct p2m_domain *p2m);
 
 static inline void p2m_write_unlock(struct p2m_domain *p2m)
@@ -734,7 +750,6 @@ enum p2m_operation {
     INSERT,
     REMOVE,
     RELINQUISH,
-    CACHEFLUSH,
     MEMACCESS,
 };
 
@@ -993,36 +1008,6 @@ static int apply_one_level(struct domain *d,
          */
         return P2M_ONE_PROGRESS;
 
-    case CACHEFLUSH:
-        if ( !p2m_valid(orig_pte) )
-        {
-            *addr = (*addr + level_size) & level_mask;
-            return P2M_ONE_PROGRESS_NOP;
-        }
-
-        if ( level < 3 && p2m_table(orig_pte) )
-            return P2M_ONE_DESCEND;
-
-        /*
-         * could flush up to the next superpage boundary, but would
-         * need to be careful about preemption, so just do one 4K page
-         * now and return P2M_ONE_PROGRESS{,_NOP} so that the caller will
-         * continue to loop over the rest of the range.
-         */
-        if ( p2m_is_ram(orig_pte.p2m.type) )
-        {
-            unsigned long offset = paddr_to_pfn(*addr & ~level_mask);
-            flush_page_to_ram(orig_pte.p2m.base + offset);
-
-            *addr += PAGE_SIZE;
-            return P2M_ONE_PROGRESS;
-        }
-        else
-        {
-            *addr += PAGE_SIZE;
-            return P2M_ONE_PROGRESS_NOP;
-        }
-
     case MEMACCESS:
         if ( level < 3 )
         {
@@ -1571,12 +1556,44 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     gfn_t end = gfn_add(start, nr);
+    gfn_t next_gfn;
+    p2m_type_t t;
+    unsigned int order;
 
     start = gfn_max(start, p2m->lowest_mapped_gfn);
     end = gfn_min(end, p2m->max_mapped_gfn);
 
-    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
-                             0, p2m_invalid, d->arch.p2m.default_access);
+    /*
+     * The operation cache flush will invalidate the RAM assigned to the
+     * guest in a given range. It will not modify the page table and
+     * flushing the cache whilst the page is used by another CPU is
+     * fine. So using read-lock is fine here.
+     */
+    p2m_read_lock(p2m);
+
+    for ( ; gfn_x(start) < gfn_x(end); start = next_gfn )
+    {
+        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
+
+        next_gfn = gfn_next_boundary(start, order);
+
+        /* Skip hole and non-RAM page */
+        if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(t) )
+            continue;
+
+        /* XXX: Implement preemption */
+        while ( gfn_x(start) < gfn_x(next_gfn) )
+        {
+            flush_page_to_ram(mfn_x(mfn));
+
+            start = gfn_add(start, 1);
+            mfn = mfn_add(mfn, 1);
+        }
+    }
+
+    p2m_read_unlock(p2m);
+
+    return 0;
 }
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
-- 
1.9.1

