From: Tim Deegan
Subject: Re: [PATCH 03/10] xen: arm: reduce instruction cache and tlb flushes to inner-shareable.
Date: Thu, 4 Jul 2013 12:21:32 +0100
Message-ID: <20130704112132.GF40611@ocelot.phlegethon.org>
In-Reply-To: <20130704111902.GE40611@ocelot.phlegethon.org>
To: Ian Campbell
Cc: julien.grall@citrix.com, xen-devel@lists.xen.org, stefano.stabellini@eu.citrix.com
List-Id: xen-devel@lists.xenproject.org

At 12:19 +0100 on 04 Jul (1372940342), Tim Deegan wrote:
> At 12:07 +0100 on 04 Jul (1372939621), Tim Deegan wrote:
> > At 17:10 +0100 on 28 Jun (1372439449), Ian Campbell wrote:
> > > Now that Xen maps memory and performs pagetable walks as inner shareable we
> > > don't need to push updates down so far when modifying page tables etc.
> > >
> > > Signed-off-by: Ian Campbell
> > >
> > > --- a/xen/include/asm-arm/arm32/page.h
> > > +++ b/xen/include/asm-arm/arm32/page.h
> > > @@ -39,8 +39,8 @@ static inline void flush_xen_text_tlb(void)
> > >      asm volatile (
> > >          "isb;"                       /* Ensure synchronization with previous changes to text */
> > >          STORE_CP32(0, TLBIALLH)      /* Flush hypervisor TLB */
> > > -        STORE_CP32(0, ICIALLU)      /* Flush I-cache */
> > > -        STORE_CP32(0, BPIALL)       /* Flush branch predictor */
> > > +        STORE_CP32(0, ICIALLUIS)    /* Flush I-cache */
> > > +        STORE_CP32(0, BPIALLIS)     /* Flush branch predictor */
> > >          "dsb;"                       /* Ensure completion of TLB+BP flush */
> > >          "isb;"
> > >          : : "r" (r0) /*dummy*/ : "memory");
> > > @@ -54,7 +54,7 @@ static inline void flush_xen_data_tlb(void)
> > >  {
> > >      register unsigned long r0 asm ("r0");
> > >      asm volatile("dsb;" /* Ensure preceding are visible */
> > > -                 STORE_CP32(0, TLBIALLH)
> > > +                 STORE_CP32(0, TLBIALLHIS)
> > >                   "dsb;" /* Ensure completion of the TLB flush */
> > >                   "isb;"
> > >                   : : "r" (r0) /* dummy */: "memory");
> > > @@ -69,7 +69,7 @@ static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long s
> > >      unsigned long end = va + size;
> > >      dsb(); /* Ensure preceding are visible */
> > >      while ( va < end ) {
> > > -        asm volatile(STORE_CP32(0, TLBIMVAH)
> > > +        asm volatile(STORE_CP32(0, TLBIMVAHIS)
> > >                       : : "r" (va) : "memory");
> > >          va += PAGE_SIZE;
> > >      }
> >
> > That's OK for actual Xen data mappings, map_domain_page() &c., but now
> > set_fixmap() and clear_fixmap() need to use a stronger flush whenever
> > they map device memory.  The same goes for create_xen_entries() when
> > ai != WRITEALLOC.
>
> Ian has pointed out that this is actually making the flushes _stronger_
> (and that in general the TLB flush operations need a bit of attention).
>
> So I suggest that we drop the TLB-flush parts of this patch for now, and
> address that whole area separately.  In the meantime, the cache-flush
> parts are Acked-by: Tim Deegan .
Wait, no - that made no sense. :)  I retract the ack, and let's start
again with the TLB-flush ops.

I don't think anything else in this series relies on this patch.

Tim.