From: Andy Lutomirski
Date: Fri, 23 Jun 2017 08:46:40 -0700
Subject: Re: [PATCH v3 05/11] x86/mm: Track the TLB's tlb_gen and update the flushing algorithm
To: Borislav Petkov
Cc: Andy Lutomirski, X86 ML, linux-kernel@vger.kernel.org, Linus Torvalds,
 Andrew Morton, Mel Gorman, linux-mm@kvack.org, Nadav Amit, Rik van Riel,
 Dave Hansen, Arjan van de Ven, Peter Zijlstra

On Fri, Jun 23, 2017 at 1:42 AM, Borislav Petkov wrote:
> On Thu, Jun 22, 2017 at 11:08:38AM -0700, Andy Lutomirski wrote:
>> Yes, I agree it's confusing.  There really are three numbers.  Those
>> numbers are: the latest generation, the generation that this CPU has
>> caught up to, and the generation that the requester of the flush we're
>> currently handling has asked us to catch up to.  I don't see a way to
>> reduce the complexity.
>
> Yeah, can you pls put that clarification of what is what over it.  It
> explains nicely what the check is supposed to do.

Done.  I've tried to improve a bunch of the comments in this function.

>
>> >> The flush IPI hits after a switch_mm_irqs_off() call notices the
>> >> change from 1 to 2.  switch_mm_irqs_off() will do a full flush and
>> >> increment the local tlb_gen to 2, and the IPI handler for the partial
>> >> flush will see local_tlb_gen == mm_tlb_gen - 1 (because local_tlb_gen
>> >> == 2 and mm_tlb_gen == 3) and do a partial flush.
>> >
>> > Why, the 2->3 flush has f->end == TLB_FLUSH_ALL.
>> >
>> > That's why you have this thing in addition to the tlb_gen.
>>
>> Yes.  The idea is that we only do remote partial flushes when it's
>> 100% obvious that it's safe.
>
> So why wouldn't my simplified suggestion work then?
>
> if (f->end != TLB_FLUSH_ALL &&
>     mm_tlb_gen == local_tlb_gen + 1)
>
> 1->2 is a partial flush - gets promoted to a full one
> 2->3 is a full flush - it will get executed as one due to the f->end
> setting to TLB_FLUSH_ALL.

This could still fail in some cases, I think.  Suppose 1->2 is a
partial flush and 2->3 is a full flush.  We could have this order of
events:

 - CPU 1: Partial flush.  Increase context.tlb_gen to 2 and send IPI.
 - CPU 0: switch_mm(), observe mm_tlb_gen == 2, set local_tlb_gen to 2.
 - CPU 2: Full flush.  Increase context.tlb_gen to 3 and send IPI.
 - CPU 0: Receive partial flush IPI.  mm_tlb_gen == 3 and local_tlb_gen
   == 2, so the simplified condition allows a partial flush.  Do
   __flush_tlb_single() and set local_tlb_gen to 3.
   Our invariant is now broken: CPU 0's percpu tlb_gen is now ahead of
   its actual TLB state.
 - CPU 0: Receive full flush IPI and skip the flush.  Oops.

I think my condition makes it clear that the invariants we need hold
no matter what order everything happens in.  (A small model of both
conditions, replaying this interleaving, is appended at the end of
this mail.)

>
>> It could be converted to two full flushes or to just one, I think,
>> depending on what order everything happens in.
>
> Right. One flush at the right time would be optimal.
>
>> But this approach of using three separate tlb_gen values seems to
>> cover all the bases, and I don't think it's *that* bad.
>
> Sure.
>
> As I said in IRC, let's document that complexity then so that when we
> stumble over it in the future, we at least know why it was done this
> way.

I've given it a try.  Hopefully v4 is more clear.
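
Appendix: a minimal userspace model of the two conditions, replaying the
interleaving above.  This is a sketch of the argument, not the kernel code:
the struct name flush_req, the field new_tlb_gen (the generation the
requester of this particular flush asked us to reach), and the 0x2000 range
end are placeholders assumed for illustration; local_tlb_gen and mm_tlb_gen
stand in for the per-CPU and per-mm generation counters from the thread.

#include <stdbool.h>
#include <stdio.h>

#define TLB_FLUSH_ALL (~0UL)

struct flush_req {
	unsigned long end;		/* TLB_FLUSH_ALL or end of the range */
	unsigned long new_tlb_gen;	/* gen this request wants us to reach */
};

/* Simplified condition: partial iff this CPU is exactly one gen behind. */
static bool partial_simplified(const struct flush_req *f,
			       unsigned long local_tlb_gen,
			       unsigned long mm_tlb_gen)
{
	return f->end != TLB_FLUSH_ALL && mm_tlb_gen == local_tlb_gen + 1;
}

/* Three-way condition: also require that no later request has snuck in. */
static bool partial_threeway(const struct flush_req *f,
			     unsigned long local_tlb_gen,
			     unsigned long mm_tlb_gen)
{
	return f->end != TLB_FLUSH_ALL &&
	       f->new_tlb_gen == local_tlb_gen + 1 &&
	       f->new_tlb_gen == mm_tlb_gen;
}

int main(void)
{
	/*
	 * The interleaving from the mail: the partial flush was requested at
	 * gen 2, but by the time its IPI reaches CPU 0, CPU 0 has already
	 * caught up to gen 2 via switch_mm() and mm_tlb_gen is already 3
	 * because CPU 2 requested a full flush in the meantime.
	 */
	struct flush_req partial = { .end = 0x2000, .new_tlb_gen = 2 };
	unsigned long local_tlb_gen = 2, mm_tlb_gen = 3;

	printf("simplified condition: %s\n",
	       partial_simplified(&partial, local_tlb_gen, mm_tlb_gen) ?
	       "partial flush (local_tlb_gen jumps to 3 without a full flush)" :
	       "full flush");
	printf("three-way condition:  %s\n",
	       partial_threeway(&partial, local_tlb_gen, mm_tlb_gen) ?
	       "partial flush" :
	       "full flush (stale partial request promoted to a full flush)");
	return 0;
}

With the simplified check, the stale partial request is taken as a partial
flush and local_tlb_gen jumps past a generation that actually required a
full flush, so the later full-flush IPI is wrongly skipped.  With the
three-way check, the stale request falls through to a full flush, and
skipping the later IPI once local_tlb_gen has reached mm_tlb_gen is then
legitimate.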