Date: Fri, 9 Jun 2017 15:45:54 +0100
From: Will Deacon
To: Peter Zijlstra
Cc: torvalds@linux-foundation.org, oleg@redhat.com,
	paulmck@linux.vnet.ibm.com, benh@kernel.crashing.org,
	mpe@ellerman.id.au, npiggin@gmail.com, linux-kernel@vger.kernel.org,
	mingo@kernel.org, stern@rowland.harvard.edu,
	Mel Gorman, Rik van Riel
Subject: Re: [RFC][PATCH 1/5] mm: Rework {set,clear,mm}_tlb_flush_pending()
Message-ID: <20170609144553.GN13955@arm.com>
References: <20170607161501.819948352@infradead.org>
 <20170607162013.705678923@infradead.org>
In-Reply-To: <20170607162013.705678923@infradead.org>

On Wed, Jun 07, 2017 at 06:15:02PM +0200, Peter Zijlstra wrote:
> Commit:
>
>   af2c1401e6f9 ("mm: numa: guarantee that tlb_flush_pending updates
>   are visible before page table updates")
>
> added smp_mb__before_spinlock() to set_tlb_flush_pending(). I think we
> can solve the same problem without this barrier.
>
> If instead we mandate that mm_tlb_flush_pending() is used while
> holding the PTL, we're guaranteed to observe prior
> set_tlb_flush_pending() instances.
>
> For this to work we need to rework migrate_misplaced_transhuge_page()
> a little and move the test up into do_huge_pmd_numa_page().
>
> Cc: Mel Gorman
> Cc: Rik van Riel
> Signed-off-by: Peter Zijlstra (Intel)
> ---
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -527,18 +527,16 @@ static inline cpumask_t *mm_cpumask(stru
>   */
>  static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
>  {
> -	barrier();
> +	/*
> +	 * Must be called with PTL held; such that our PTL acquire will have
> +	 * observed the store from set_tlb_flush_pending().
> +	 */
>  	return mm->tlb_flush_pending;
>  }
>  static inline void set_tlb_flush_pending(struct mm_struct *mm)
>  {
>  	mm->tlb_flush_pending = true;
> -
> -	/*
> -	 * Guarantee that the tlb_flush_pending store does not leak into the
> -	 * critical section updating the page tables
> -	 */
> -	smp_mb__before_spinlock();
> +	barrier();

Why do you need the barrier() here? Isn't the ptl unlock sufficient?

Will
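
For context, the ordering property Peter's patch relies on can be shown
with a small userspace sketch. This is not the kernel code: a POSIX
mutex stands in for the PTL, the flusher/reader names are invented for
illustration, and relaxed atomics mimic the kernel's plain accesses.
The point is that a store made before taking a lock is visible to any
thread whose acquire of that lock follows the storing thread's critical
section, with no explicit barrier at the store site.

	/*
	 * Minimal sketch of the PTL-ordering argument. All names here
	 * are illustrative, not taken from the kernel sources.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	/* stand-in for the page-table lock */
	static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;
	/* relaxed ops below mimic the kernel's plain accesses */
	static atomic_bool tlb_flush_pending;

	/* mimics set_tlb_flush_pending() followed by PTE updates under PTL */
	static void *flusher(void *arg)
	{
		(void)arg;
		/* plain store; no explicit barrier before taking the lock */
		atomic_store_explicit(&tlb_flush_pending, 1,
				      memory_order_relaxed);

		pthread_mutex_lock(&ptl);	/* acquire */
		/* ... modify page tables here ... */
		pthread_mutex_unlock(&ptl);	/* release orders the store above */
		return NULL;
	}

	/* mimics an mm_tlb_flush_pending() caller that holds the PTL */
	static void *reader(void *arg)
	{
		(void)arg;
		pthread_mutex_lock(&ptl);
		/*
		 * If the flusher's critical section completed before this
		 * acquire, lock ordering guarantees its earlier store is
		 * visible here too, even though the store itself used no
		 * barrier. If the flusher has not run yet, reading 0 is
		 * also correct: there is no pending flush to miss.
		 */
		int pending = atomic_load_explicit(&tlb_flush_pending,
						   memory_order_relaxed);
		pthread_mutex_unlock(&ptl);
		printf("tlb_flush_pending observed as %d\n", pending);
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;
		pthread_create(&a, NULL, flusher, NULL);
		pthread_create(&b, NULL, reader, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		return 0;
	}

Build with "cc -pthread sketch.c" and run it a few times; whenever the
reader's lock acquisition follows the flusher's critical section, it
prints 1. Will's question concerns the other side of the patch: the
compiler barrier() added after the store in set_tlb_flush_pending(),
given that readers already get their guarantee from the lock.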