From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S934703Ab3CNRwG (ORCPT);
	Thu, 14 Mar 2013 13:52:06 -0400
Received: from mga02.intel.com ([134.134.136.20]:52023 "EHLO mga02.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S934597Ab3CNRtY (ORCPT);
	Thu, 14 Mar 2013 13:49:24 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,845,1355126400"; d="scan'208";a="279559798"
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrea Arcangeli, Andrew Morton, Al Viro, Hugh Dickins
Cc: Wu Fengguang, Jan Kara, Mel Gorman, linux-mm@kvack.org,
	Andi Kleen, Matthew Wilcox, "Kirill A. Shutemov",
	Hillf Danton, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv2, RFC 17/30] thp: wait_split_huge_page(): serialize over
	i_mmap_mutex too
Date: Thu, 14 Mar 2013 19:50:22 +0200
Message-Id: <1363283435-7666-18-git-send-email-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1363283435-7666-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1363283435-7666-1-git-send-email-kirill.shutemov@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

Since we're going to have huge pages backed by files,
wait_split_huge_page() has to serialize not only over anon_vma_lock,
but over i_mmap_mutex too.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/huge_mm.h | 15 ++++++++++++---
 mm/huge_memory.c        |  4 ++--
 mm/memory.c             |  4 ++--
 3 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a54939c..b53e295 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -113,11 +113,20 @@ extern void __split_huge_page_pmd(struct vm_area_struct *vma,
 			__split_huge_page_pmd(__vma, __address,		\
 					____pmd);			\
 	}  while (0)
-#define wait_split_huge_page(__anon_vma, __pmd)				\
+#define wait_split_huge_page(__vma, __pmd)				\
 	do {								\
 		pmd_t *____pmd = (__pmd);				\
-		anon_vma_lock_write(__anon_vma);			\
-		anon_vma_unlock_write(__anon_vma);			\
+		struct address_space *__mapping = (__vma)->vm_file ?	\
+				(__vma)->vm_file->f_mapping : NULL;	\
+		struct anon_vma *__anon_vma = (__vma)->anon_vma;	\
+		if (__mapping)						\
+			mutex_lock(&__mapping->i_mmap_mutex);		\
+		if (__anon_vma) {					\
+			anon_vma_lock_write(__anon_vma);		\
+			anon_vma_unlock_write(__anon_vma);		\
+		}							\
+		if (__mapping)						\
+			mutex_unlock(&__mapping->i_mmap_mutex);		\
 		BUG_ON(pmd_trans_splitting(*____pmd) ||			\
 		       pmd_trans_huge(*____pmd));			\
 	} while (0)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index eb777d3..a23da8b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -907,7 +907,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		spin_unlock(&dst_mm->page_table_lock);
 		pte_free(dst_mm, pgtable);
 
-		wait_split_huge_page(vma->anon_vma, src_pmd); /* src_vma */
+		wait_split_huge_page(vma, src_pmd); /* src_vma */
 		goto out;
 	}
 	src_page = pmd_page(pmd);
@@ -1480,7 +1480,7 @@ int __pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
 	if (likely(pmd_trans_huge(*pmd))) {
 		if (unlikely(pmd_trans_splitting(*pmd))) {
 			spin_unlock(&vma->vm_mm->page_table_lock);
-			wait_split_huge_page(vma->anon_vma, pmd);
+			wait_split_huge_page(vma, pmd);
 			return -1;
 		} else {
 			/* Thp mapped by 'pmd' is stable, so we can
diff --git a/mm/memory.c b/mm/memory.c
index 98c25dd..52bd6cf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -619,7 +619,7 @@ int __pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (new)
 		pte_free(mm, new);
 	if (wait_split_huge_page)
-		wait_split_huge_page(vma->anon_vma, pmd);
+		wait_split_huge_page(vma, pmd);
 	return 0;
 }
 
@@ -1529,7 +1529,7 @@ struct page *follow_page_mask(struct vm_area_struct *vma,
 	if (likely(pmd_trans_huge(*pmd))) {
 		if (unlikely(pmd_trans_splitting(*pmd))) {
 			spin_unlock(&mm->page_table_lock);
-			wait_split_huge_page(vma->anon_vma, pmd);
+			wait_split_huge_page(vma, pmd);
 		} else {
 			page = follow_trans_huge_pmd(vma, address,
 						     pmd, flags);
-- 
1.7.10.4