From: Max Filippov
Subject: Re: TLB and PTE coherency during munmap
Date: Mon, 3 Jun 2013 13:16:47 +0400
To: Catalin Marinas
Cc: "linux-arch@vger.kernel.org", "linux-mm@kvack.org",
    "linux-xtensa@linux-xtensa.org", Chris Zankel, Marc Gauthier
List-Id: linux-arch.vger.kernel.org

On Fri, May 31, 2013 at 5:26 AM, Max Filippov wrote:
> On Wed, May 29, 2013 at 2:15 PM, Catalin Marinas wrote:
>> On Wed, May 29, 2013 at 05:15:28AM +0100, Max Filippov wrote:
>>> On Tue, May 28, 2013 at 6:35 PM, Catalin Marinas wrote:
>>> > On 26 May 2013 03:42, Max Filippov wrote:
>>> >> Is it intentional that threads of a process that invoked the munmap
>>> >> syscall can see TLB entries pointing to already freed pages, or is
>>> >> it a bug?
>>> >
>>> > If it happens, this would be a bug. It means that a process can access
>>> > a physical page that has been allocated to something else, possibly
>>> > kernel data.
>>> >
>>> >> I'm talking about zap_pmd_range and zap_pte_range:
>>> >>
>>> >> zap_pmd_range
>>> >>   zap_pte_range
>>> >>     arch_enter_lazy_mmu_mode
>>> >>     ptep_get_and_clear_full
>>> >>     tlb_remove_tlb_entry
>>> >>     __tlb_remove_page
>>> >>     arch_leave_lazy_mmu_mode
>>> >>   cond_resched
>>> >>
>>> >> With the default arch_{enter,leave}_lazy_mmu_mode, tlb_remove_tlb_entry
>>> >> and __tlb_remove_page there is a loop in zap_pte_range that clears
>>> >> PTEs and frees the corresponding pages, but doesn't flush the TLB,
>>> >> and a surrounding loop in zap_pmd_range that calls cond_resched. If
>>> >> a thread of the same process gets scheduled then it is able to see
>>> >> TLB entries pointing to already freed physical pages.
>>> >
>>> > It looks to me like cond_resched() here introduces a possible bug,
>>> > but it depends on the actual arch code, especially the
>>> > __tlb_remove_tlb_entry() function. On ARM we record the range in
>>> > tlb_remove_tlb_entry() and queue the pages to be removed in
>>> > __tlb_remove_page(). It pretty much acts like tlb_fast_mode() == 0
>>> > even for the UP case (which is also needed for hardware speculative
>>> > TLB loads). tlb_finish_mmu() takes care of whatever pages are left
>>> > to be freed.
>>> >
>>> > With a dummy __tlb_remove_tlb_entry() and tlb_fast_mode() == 1,
>>> > cond_resched() in zap_pmd_range() would cause problems.
>>>
>>> So it looks like most architectures in the UP configuration should
>>> have this issue (unless they flush the TLB in switch_mm, even when
>>> switching to the same mm):
>>
>> switch_mm() wouldn't be called if switching to the same mm. You could do
>
> Hmm... Strange, but as far as I can tell from context_switch it would.
>
>> it in switch_to() but it's not efficient (or before returning to user
>> space on the same processor).
>>
>> Do you happen to have a user-space test for this? Something like one
>
> I only had mtest05 from LTP that triggered TLB/PTE inconsistency, but
> not anything that would really try to peek at the freed page. I can
> make such a test though.
>
>> thread does an mmap(), writes some poison value, munmap(). The other
>> thread keeps checking the poison value while trapping and ignoring any
>> SIGSEGV. If it's working correctly, the second thread should either
>> get a SIGSEGV or read the poison value.

I've made a number of such tests and had them running for a couple of
days. So far the checking thread has never read anything other than the
poison value.

--
Thanks.
-- Max