From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vineet Gupta
Subject: Re: TLB and PTE coherency during munmap
Date: Thu, 30 May 2013 12:30:51 +0530
Message-ID: <51A6F923.6010709@synopsys.com>
References: <51A45861.1010008@gmail.com>
 <20130529122728.GA27176@twins.programming.kicks-ass.net>
 <51A5F7A7.5020604@synopsys.com>
 <20130529175125.GJ12193@twins.programming.kicks-ass.net>
 <51A6DDF5.2000406@synopsys.com>
 <20130530065627.GL12193@twins.programming.kicks-ass.net>
Mime-Version: 1.0
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit
In-Reply-To: <20130530065627.GL12193@twins.programming.kicks-ass.net>
Sender: owner-linux-mm@kvack.org
To: Peter Zijlstra
Cc: Max Filippov, KAMEZAWA Hiroyuki, linux-arch@vger.kernel.org,
 linux-mm@kvack.org, Ralf Baechle, Chris Zankel, Marc Gauthier,
 linux-xtensa@linux-xtensa.org, Hugh Dickins
List-Id: linux-arch.vger.kernel.org

On 05/30/2013 12:26 PM, Peter Zijlstra wrote:
> On Thu, May 30, 2013 at 10:34:53AM +0530, Vineet Gupta wrote:
>> On 05/29/2013 11:21 PM, Peter Zijlstra wrote:
>>
>> BTW, since we are on the topic, it seems that we are missing tlb_fast_mode() in
>> one spot - unless it is tied to the rcu table free stuff.
>>
>> -------------->
>> From: Vineet Gupta
>> Date: Thu, 30 May 2013 10:25:30 +0530
>> Subject: [PATCH] mm: tlb_fast_mode check missing in tlb_finish_mmu()
>>
>> Signed-off-by: Vineet Gupta
>> ---
>>  mm/memory.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index d9d5fd9..569ffe1 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -269,6 +269,9 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
>>      /* keep the page table cache within bounds */
>>      check_pgt_cache();
>>
>> +    if (tlb_fast_mode(tlb))
>> +        return;
>> +
>>      for (batch = tlb->local.next; batch; batch = next) {
>>          next = batch->next;
>>          free_pages((unsigned long)batch, 0);
> Yes I think that is possible. It would shrink the code a little when
> fast_mode was unconditionally 1 -- ie. simple UP ;-).

Exactly! Can you please reply with a Reviewed-by/Acked-by so I can formally
send it over to the linux-mm list.

Thx,
-Vineet
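For context, this is roughly how tlb_fast_mode() was defined in include/asm-generic/tlb.h
around the time of this thread; the sketch below is reconstructed from memory of that era's
header, so the exact comment wording and the fast_mode bitfield are assumptions rather than a
verbatim quote. It shows why the proposed early return is essentially free on UP: the helper
collapses to a constant 1, so the compiler can drop the batch-freeing loop in tlb_finish_mmu()
entirely, which is what Peter's remark about "simple UP" refers to.

/*
 * Approximate reconstruction of tlb_fast_mode() circa v3.9/v3.10
 * (include/asm-generic/tlb.h); field and comment details may differ.
 */
static inline int tlb_fast_mode(struct mmu_gather *tlb)
{
#ifdef CONFIG_SMP
	/* SMP: whether batching is in use is tracked per mmu_gather */
	return tlb->fast_mode;
#else
	/*
	 * UP: no concurrent page-table walkers, so pages need not be
	 * held back until after the TLB flush -- always "fast" mode.
	 */
	return 1;
#endif
}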