From: Arianna Avanzini
Subject: [PATCH] arch/arm: unmap partially-mapped memory regions
Date: Tue, 2 Sep 2014 01:47:34 +0200
Message-ID: <1409615254-5148-1-git-send-email-avanzini.arianna@gmail.com>
References: <5404B2B1.4080401@linaro.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <5404B2B1.4080401@linaro.org>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: xen-devel@lists.xen.org
Cc: Ian.Campbell@eu.citrix.com, paolo.valente@unimore.it, keir@xen.org,
    stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
    dario.faggioli@citrix.com, tim@xen.org, julien.grall@citrix.com,
    etrudeau@broadcom.com, andrew.cooper3@citrix.com, JBeulich@suse.com,
    avanzini.arianna@gmail.com, viktor.kleinik@globallogic.com,
    andrii.tseglytskyi@globallogic.com
List-Id: xen-devel@lists.xenproject.org

This commit modifies apply_p2m_changes() so that, if an error occurs while
mapping a memory region, the changes already performed are destroyed. Only
the mappings created during the current invocation of apply_p2m_changes()
are removed. This prevents memory areas from remaining partially accessible
to guests.

Signed-off-by: Arianna Avanzini
Cc: Dario Faggioli
Cc: Paolo Valente
Cc: Stefano Stabellini
Cc: Julien Grall
Cc: Ian Campbell
Cc: Jan Beulich
Cc: Keir Fraser
Cc: Tim Deegan
Cc: Ian Jackson
Cc: Andrew Cooper
Cc: Eric Trudeau
Cc: Viktor Kleinik
Cc: Andrii Tseglytskyi
---
With respect to patch 0002 of the v12 memory_mapping series ([1]):
    - Add a static qualifier to the constants that change scope in the
      context of this patch, as suggested by Julien Grall.

Previous history of this patch within the patchset ([1]):

    v12: - Unmap only the memory area actually affected by the current
           invocation of apply_p2m_changes().
         - Use the correct mattr instead of always using MATTR_DEV when
           unmapping a partially-mapped memory region.

    v11: - Handle partially-mapped memory regions regardless of whether
           they are I/O-memory regions.

    v10: - Recursively call apply_p2m_changes() on the whole I/O-memory
           range when unmapping a partially-mapped I/O-memory region.

    v9:  - Let apply_p2m_ranges() unwind its own progress instead of
           relying on the caller to unmap partially-mapped I/O-memory
           regions.
         - Adapt to the rework of p2m-related functions for ARM.
[1] http://markmail.org/message/yxuie76e7antewyb

---
 xen/arch/arm/p2m.c | 41 ++++++++++++++++++++++++++++++-----------
 1 file changed, 30 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8f83d17..ede839d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -440,6 +440,14 @@ static bool_t is_mapping_aligned(const paddr_t start_gpaddr,
 #define P2M_ONE_PROGRESS_NOP 0x1
 #define P2M_ONE_PROGRESS 0x10
 
+/* Helpers to lookup the properties of each level */
+static const paddr_t level_sizes[] =
+    { ZEROETH_SIZE, FIRST_SIZE, SECOND_SIZE, THIRD_SIZE };
+static const paddr_t level_masks[] =
+    { ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK };
+static const paddr_t level_shifts[] =
+    { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
+
 /*
  * 0 == (P2M_ONE_DESCEND) continue to descend the tree
  * +ve == (P2M_ONE_PROGRESS_*) handled at this level, continue, flush,
@@ -460,13 +468,6 @@ static int apply_one_level(struct domain *d,
                            int mattr,
                            p2m_type_t t)
 {
-    /* Helpers to lookup the properties of each level */
-    const paddr_t level_sizes[] =
-        { ZEROETH_SIZE, FIRST_SIZE, SECOND_SIZE, THIRD_SIZE };
-    const paddr_t level_masks[] =
-        { ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK };
-    const paddr_t level_shifts[] =
-        { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
     const paddr_t level_size = level_sizes[level];
     const paddr_t level_mask = level_masks[level];
     const paddr_t level_shift = level_shifts[level];
@@ -713,7 +714,8 @@ static int apply_p2m_changes(struct domain *d,
     int rc, ret;
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t *first = NULL, *second = NULL, *third = NULL;
-    paddr_t addr;
+    paddr_t addr, orig_maddr = maddr;
+    unsigned int level = 0;
     unsigned long cur_first_page = ~0,
                   cur_first_offset = ~0,
                   cur_second_offset = ~0;
@@ -769,8 +771,9 @@ static int apply_p2m_changes(struct domain *d,
          * current hardware doesn't support super page mappings at
          * level 0 anyway */
 
+        level = 1;
         ret = apply_one_level(d, &first[first_table_offset(addr)],
-                              1, flush_pt, op,
+                              level, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
                               mattr, t);
@@ -790,8 +793,9 @@ static int apply_p2m_changes(struct domain *d,
         }
         /* else: second already valid */
 
+        level = 2;
         ret = apply_one_level(d,&second[second_table_offset(addr)],
-                              2, flush_pt, op,
+                              level, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
                               mattr, t);
@@ -809,8 +813,9 @@ static int apply_p2m_changes(struct domain *d,
             cur_second_offset = second_table_offset(addr);
         }
 
+        level = 3;
         ret = apply_one_level(d, &third[third_table_offset(addr)],
-                              3, flush_pt, op,
+                              level, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
                               mattr, t);
@@ -844,6 +849,20 @@ out:
     if (third) unmap_domain_page(third);
     if (second) unmap_domain_page(second);
     if (first) unmap_domain_page(first);
+    if ( rc < 0 && ( op == INSERT || op == ALLOCATE ) &&
+         addr != start_gpaddr )
+    {
+        BUG_ON(addr == end_gpaddr);
+        /*
+         * addr keeps the address of the last successfully-inserted mapping,
+         * while apply_p2m_changes() considers an address range which is
+         * exclusive of end_gpaddr: add level_size to addr to obtain the
+         * right end of the range
+         */
+        apply_p2m_changes(d, REMOVE,
+                          start_gpaddr, addr + level_sizes[level], orig_maddr,
+                          mattr, p2m_invalid);
+    }
 
     spin_unlock(&p2m->lock);
-- 
2.1.0
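
[Editorial note] For readers who want to see the roll-back behaviour in
isolation, below is a minimal, self-contained sketch of the same pattern
outside of Xen. The names map_one_page(), unmap_one_page() and map_region()
are hypothetical stand-ins introduced only for illustration; in the patch
itself the unwinding is performed by recursively calling apply_p2m_changes()
with REMOVE over the range [start_gpaddr, addr + level_sizes[level]).

/*
 * Illustration only (not Xen code): map a guest range page by page and,
 * if any step fails, tear down every mapping created by this call before
 * returning the error, so the region is never left partially mapped.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

/* Hypothetical per-page primitives standing in for the p2m updates. */
static int map_one_page(uint64_t gpaddr, uint64_t maddr)
{
    /* Simulate a failure after the first 8 pages of the region. */
    if ( gpaddr >= 0x40000000ULL + 8 * PAGE_SIZE )
        return -1;
    printf("map   0x%llx -> 0x%llx\n",
           (unsigned long long)gpaddr, (unsigned long long)maddr);
    return 0;
}

static void unmap_one_page(uint64_t gpaddr)
{
    printf("unmap 0x%llx\n", (unsigned long long)gpaddr);
}

/* Map [start, end) to maddr; on error, undo only this call's progress. */
static int map_region(uint64_t start, uint64_t end, uint64_t maddr)
{
    uint64_t addr;

    for ( addr = start; addr < end; addr += PAGE_SIZE, maddr += PAGE_SIZE )
    {
        if ( map_one_page(addr, maddr) < 0 )
        {
            /* Unwind: pages in [start, addr) were mapped by this call. */
            while ( addr > start )
            {
                addr -= PAGE_SIZE;
                unmap_one_page(addr);
            }
            return -1;
        }
    }

    return 0;
}

int main(void)
{
    /* A 16-page region; the stub above makes the 9th page fail. */
    if ( map_region(0x40000000ULL, 0x40000000ULL + 16 * PAGE_SIZE,
                    0x80000000ULL) < 0 )
        printf("mapping failed, partial mappings removed\n");

    return 0;
}

The key property, matching the patch, is that the unwind covers only the
range touched by the failing invocation, so mappings established by earlier,
successful calls are left untouched.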