From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751479Ab1IZNOn (ORCPT);
	Mon, 26 Sep 2011 09:14:43 -0400
Received: from acsinet15.oracle.com ([141.146.126.227]:41251 "EHLO
	acsinet15.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750750Ab1IZNOm (ORCPT);
	Mon, 26 Sep 2011 09:14:42 -0400
From: Konrad Rzeszutek Wilk
To: linux-kernel@vger.kernel.org, jeremy@goop.org
Cc: xen-devel@lists.xensource.com, Konrad Rzeszutek Wilk,
	Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", x86@kernel.org,
	Peter Zijlstra, Jeremy Fitzhardinge, stable@kernel.org
Subject: [PATCH] x86/paravirt: Partially revert "remove lazy mode in interrupts"
Date: Mon, 26 Sep 2011 09:13:17 -0400
Message-Id: <1317042797-19975-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.4.1
X-Source-IP: rtcsinet21.oracle.com [66.248.204.29]
X-CT-RefId: str=0001.0A090202.4E807A8D.00D9,ss=1,re=0.000,fgs=0
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

which has git commit b8bcfe997e46150fedcc3f5b26b846400122fdd9.

The unintended consequence of removing the flushing of MMU updates
when doing kmap_atomic (or kunmap_atomic) is that we can hit a
dereference bug when processing a "fork()" on a heavily loaded
machine. Specifically we can hit:

BUG: unable to handle kernel paging request at f573fc8c
IP: [] swap_count_continued+0x104/0x180
*pdpt = 000000002a3b9027 *pde = 0000000001bed067 *pte = 0000000000000000
Oops: 0000 [#1] SMP
Modules linked in:
Pid: 1638, comm: apache2 Not tainted 3.0.4-linode37 #1
EIP: 0061:[] EFLAGS: 00210246 CPU: 3
EIP is at swap_count_continued+0x104/0x180
.. snip..
Call Trace:
 [] ? __swap_duplicate+0xc2/0x160
 [] ? pte_mfn_to_pfn+0x87/0xe0
 [] ? swap_duplicate+0x14/0x40
 [] ? copy_pte_range+0x45b/0x500
 [] ? copy_page_range+0x195/0x200
 [] ? dup_mmap+0x1c6/0x2c0
 [] ? dup_mm+0xa8/0x130
 [] ? copy_process+0x98a/0xb30
 [] ? do_fork+0x4f/0x280
 [] ? getnstimeofday+0x43/0x100
 [] ? sys_clone+0x30/0x40
 [] ? ptregs_clone+0x15/0x48
 [] ? syscall_call+0x7/0xb

The problem is that copy_page_range turns lazy mode on, and then
swap_entry_free calls swap_count_continued, which ends up doing:

	map = kmap_atomic(page, KM_USER0) + offset;

and later touches *map. Since we are running in batched (lazy) mode,
the PTE mapping is not actually set up synchronously by kmap_atomic,
so we end up dereferencing a page whose mapping has not been
established yet.

Looking at kmap_atomic_prot_pfn, it uses 'arch_flush_lazy_mmu_mode',
and sprinkling that into kmap_atomic_prot and __kunmap_atomic makes
the problem go away.
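(Editor's illustration, not part of the patch and not kernel code: the
queue-then-flush hazard can be pictured with the small userspace sketch
below. set_pte and arch_flush_lazy_mmu_mode here are hypothetical
stand-ins that only mimic the batching behaviour of lazy MMU mode.)

/*
 * Userspace sketch (illustrative only): a "lazy MMU" that queues PTE
 * writes and only applies them on an explicit flush.  Reading through
 * the mapping before the flush sees nothing -- the analogue of the
 * oops in swap_count_continued above.
 */
#include <stdio.h>
#include <stdbool.h>

#define NPTES 4

static int ptes[NPTES];          /* "live" page table entries        */
static int pending[NPTES];       /* batched updates, not yet applied */
static bool pending_valid[NPTES];
static bool lazy_mode;

static void set_pte(int idx, int val)
{
	if (lazy_mode) {             /* lazy: only queue the update */
		pending[idx] = val;
		pending_valid[idx] = true;
	} else {                     /* immediate: hit the "page tables" now */
		ptes[idx] = val;
	}
}

static void arch_flush_lazy_mmu_mode(void)
{
	for (int i = 0; i < NPTES; i++) {
		if (pending_valid[i]) {
			ptes[i] = pending[i];
			pending_valid[i] = false;
		}
	}
}

int main(void)
{
	lazy_mode = true;            /* copy_page_range turned lazy mode on */

	set_pte(0, 42);              /* kmap_atomic installs a fixmap PTE   */
	printf("before flush: pte[0] = %d (mapping not live)\n", ptes[0]);

	arch_flush_lazy_mmu_mode();  /* the call this patch adds */
	printf("after flush:  pte[0] = %d (mapping usable)\n", ptes[0]);
	return 0;
}

Until the flush runs, the value written by set_pte is not visible,
which is the userspace analogue of the fault above; the patch makes
kmap_atomic_prot/__kunmap_atomic perform that flush themselves.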
Peter Anvin" CC: x86@kernel.org CC: Peter Zijlstra CC: Jeremy Fitzhardinge CC: stable@kernel.org Signed-off-by: Konrad Rzeszutek Wilk --- arch/x86/mm/highmem_32.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c index b499626..f4f29b1 100644 --- a/arch/x86/mm/highmem_32.c +++ b/arch/x86/mm/highmem_32.c @@ -45,6 +45,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot) vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); BUG_ON(!pte_none(*(kmap_pte-idx))); set_pte(kmap_pte-idx, mk_pte(page, prot)); + arch_flush_lazy_mmu_mode(); return (void *)vaddr; } @@ -88,6 +89,7 @@ void __kunmap_atomic(void *kvaddr) */ kpte_clear_flush(kmap_pte-idx, vaddr); kmap_atomic_idx_pop(); + arch_flush_lazy_mmu_mode(); } #ifdef CONFIG_DEBUG_HIGHMEM else { -- 1.7.4.1 From mboxrd@z Thu Jan 1 00:00:00 1970 From: Konrad Rzeszutek Wilk Subject: [PATCH] x86/paravirt: Partially revert "remove lazy mode in interrupts" Date: Mon, 26 Sep 2011 09:13:17 -0400 Message-ID: <1317042797-19975-1-git-send-email-konrad.wilk@oracle.com> Return-path: List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Sender: xen-devel-bounces@lists.xensource.com Errors-To: xen-devel-bounces@lists.xensource.com To: linux-kernel@vger.kernel.org, jeremy@goop.org Cc: xen-devel@lists.xensource.com, Peter Zijlstra , Konrad Rzeszutek Wilk , x86@kernel.org, Jeremy Fitzhardinge , Ingo Molnar , "H. Peter Anvin" , Thomas Gleixner , stable@kernel.org List-Id: xen-devel@lists.xenproject.org which has git commit b8bcfe997e46150fedcc3f5b26b846400122fdd9. The unintended consequence of removing the flushing of MMU updates when doing kmap_atomic (or kunmap_atomic) is that we can hit a dereference bug when processing a "fork()" under a heavy loaded machine. Specifically we can hit: BUG: unable to handle kernel paging request at f573fc8c IP: [] swap_count_continued+0x104/0x180 *pdpt = 000000002a3b9027 *pde = 0000000001bed067 *pte = 0000000000000000 Oops: 0000 [#1] SMP Modules linked in: Pid: 1638, comm: apache2 Not tainted 3.0.4-linode37 #1 EIP: 0061:[] EFLAGS: 00210246 CPU: 3 EIP is at swap_count_continued+0x104/0x180 .. snip.. Call Trace: [] ? __swap_duplicate+0xc2/0x160 [] ? pte_mfn_to_pfn+0x87/0xe0 [] ? swap_duplicate+0x14/0x40 [] ? copy_pte_range+0x45b/0x500 [] ? copy_page_range+0x195/0x200 [] ? dup_mmap+0x1c6/0x2c0 [] ? dup_mm+0xa8/0x130 [] ? copy_process+0x98a/0xb30 [] ? do_fork+0x4f/0x280 [] ? getnstimeofday+0x43/0x100 [] ? sys_clone+0x30/0x40 [] ? ptregs_clone+0x15/0x48 [] ? syscall_call+0x7/0xb The problem looks that in copy_page_range we turn lazy mode on, and then in swap_entry_free we call swap_count_continued which ends up in: map = kmap_atomic(page, KM_USER0) + offset; and then later touches *map. Since we are running in batched mode (lazy) we don't actually set up the PTE mappings and the kmap_atomic is not done synchronously and ends up trying to dereference a page that has not been set. Looking at kmap_atomic_prot_pfn, it uses 'arch_flush_lazy_mmu_mode' and sprinkling that in kmap_atomic_prot and __kunmap_atomic makes the problem go away. CC: Thomas Gleixner CC: Ingo Molnar CC: "H. 
Peter Anvin" CC: x86@kernel.org CC: Peter Zijlstra CC: Jeremy Fitzhardinge CC: stable@kernel.org Signed-off-by: Konrad Rzeszutek Wilk --- arch/x86/mm/highmem_32.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c index b499626..f4f29b1 100644 --- a/arch/x86/mm/highmem_32.c +++ b/arch/x86/mm/highmem_32.c @@ -45,6 +45,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot) vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); BUG_ON(!pte_none(*(kmap_pte-idx))); set_pte(kmap_pte-idx, mk_pte(page, prot)); + arch_flush_lazy_mmu_mode(); return (void *)vaddr; } @@ -88,6 +89,7 @@ void __kunmap_atomic(void *kvaddr) */ kpte_clear_flush(kmap_pte-idx, vaddr); kmap_atomic_idx_pop(); + arch_flush_lazy_mmu_mode(); } #ifdef CONFIG_DEBUG_HIGHMEM else { -- 1.7.4.1