From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton, Andrea Arcangeli, linux-mm@kvack.org
Cc: Andi Kleen, "H. Peter Anvin", linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov", "Kirill A. Shutemov"
Subject: [PATCH v4 08/10] thp: setup huge zero page on non-write page fault
Date: Mon, 15 Oct 2012 09:00:57 +0300
Message-Id: <1350280859-18801-9-git-send-email-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1350280859-18801-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1350280859-18801-1-git-send-email-kirill.shutemov@linux.intel.com>

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

All code paths seem to be covered. Now we can map the huge zero page on
a read page fault.

We set it up in do_huge_pmd_anonymous_page() if the area around the
fault address is suitable for THP and we've got a read page fault.

If we fail to set up the huge zero page (ENOMEM), we fall back to
handle_pte_fault() as we normally do in THP.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/huge_memory.c |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b267b12..da7e07b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -725,6 +725,16 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		return VM_FAULT_OOM;
 	if (unlikely(khugepaged_enter(vma)))
 		return VM_FAULT_OOM;
+	if (!(flags & FAULT_FLAG_WRITE)) {
+		pgtable_t pgtable;
+		pgtable = pte_alloc_one(mm, haddr);
+		if (unlikely(!pgtable))
+			goto out;
+		spin_lock(&mm->page_table_lock);
+		set_huge_zero_page(pgtable, mm, vma, haddr, pmd);
+		spin_unlock(&mm->page_table_lock);
+		return 0;
+	}
 	page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
 			vma, haddr, numa_node_id(), 0);
 	if (unlikely(!page)) {
-- 
1.7.7.6
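
For context: set_huge_zero_page() is not part of this patch; it is
introduced earlier in the series. A minimal sketch of what the helper
is assumed to do, based on how it is called above (the exact signature
and the huge_zero_pfn variable are assumptions from the series context,
not guaranteed by this patch):

	/* Sketch only: install the huge zero page at a pmd. */
	static void set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
			struct vm_area_struct *vma, unsigned long haddr,
			pmd_t *pmd)
	{
		pmd_t entry;
		/*
		 * Map the shared huge zero page write-protected, so a
		 * later write takes the normal write-fault path instead
		 * of scribbling on the shared page.
		 */
		entry = pfn_pmd(huge_zero_pfn, vma->vm_page_prot);
		entry = pmd_wrprotect(entry);
		entry = pmd_mkhuge(entry);
		set_pmd_at(mm, haddr, pmd, entry);
		/*
		 * Deposit the preallocated page table so a later split
		 * of the huge pmd cannot fail on allocation.
		 */
		pgtable_trans_huge_deposit(mm, pgtable);
		mm->nr_ptes++;
	}

This also explains why the fault path above calls pte_alloc_one()
before taking page_table_lock: the page table is needed only as a
deposit for a possible future split, and allocating it up front keeps
the locked section free of allocations.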