From: Yu-cheng Yu <yu-cheng.yu@intel.com>
Subject: [PATCH v5 07/27] mm/mmap: Create a guard area between VMAs
Date: Thu, 11 Oct 2018 08:15:03 -0700
Message-Id: <20181011151523.27101-8-yu-cheng.yu@intel.com>
In-Reply-To: <20181011151523.27101-1-yu-cheng.yu@intel.com>
References: <20181011151523.27101-1-yu-cheng.yu@intel.com>
Sender: owner-linux-mm@kvack.org
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
    Balbir Singh, Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
    Florian Weimer, "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook,
    Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
    Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar", Vedvyas Shanbhogue
Cc: Yu-cheng Yu <yu-cheng.yu@intel.com>

Create a guard area between VMAs to detect memory corruption.  When
CONFIG_VM_AREA_GUARD is enabled, vm_start_gap() and vm_end_gap()
report every VMA as one page larger than it is (VM_GROWSDOWN and
VM_GROWSUP areas keep the larger stack_guard_gap), so the
unmapped-area search leaves at least one unmapped page between
neighboring VMAs and a linear overrun from one area into the next
faults instead of silently corrupting the neighbor.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 include/linux/mm.h | 30 ++++++++++++++++++++----------
 mm/Kconfig         |  7 +++++++
 2 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0416a7204be3..53cfc104c0fb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2417,24 +2417,34 @@ static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * m
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
 	unsigned long vm_start = vma->vm_start;
+	unsigned long gap = 0;
+
+	if (vma->vm_flags & VM_GROWSDOWN)
+		gap = stack_guard_gap;
+	else if (IS_ENABLED(CONFIG_VM_AREA_GUARD))
+		gap = PAGE_SIZE;
+
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
 	return vm_start;
 }
 
 static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
 {
 	unsigned long vm_end = vma->vm_end;
+	unsigned long gap = 0;
+
+	if (vma->vm_flags & VM_GROWSUP)
+		gap = stack_guard_gap;
+	else if (IS_ENABLED(CONFIG_VM_AREA_GUARD))
+		gap = PAGE_SIZE;
+
+	vm_end += gap;
+	if (vm_end < vma->vm_end)
+		vm_end = -PAGE_SIZE;
 
-	if (vma->vm_flags & VM_GROWSUP) {
-		vm_end += stack_guard_gap;
-		if (vm_end < vma->vm_end)
-			vm_end = -PAGE_SIZE;
-	}
 	return vm_end;
 }
 
diff --git a/mm/Kconfig b/mm/Kconfig
index de64ea658716..0cdcad65640d 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -764,4 +764,11 @@ config GUP_BENCHMARK
 config ARCH_HAS_PTE_SPECIAL
 	bool
 
+config VM_AREA_GUARD
+	bool "VM area guard"
+	default n
+	help
+	  Create a guard area between VM areas so that accesses beyond
+	  the limit can be detected.
+
 endmenu
-- 
2.17.1
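
For illustration only (this is not part of the patch): a minimal
userspace sketch of the overrun the guard area is meant to catch,
using plain POSIX mmap()/mprotect() to stand in for the unmapped page
the kernel would leave between VMAs when CONFIG_VM_AREA_GUARD is
enabled.  A write one byte past the mapping hits the guard page and
faults instead of landing in a neighboring allocation.

	/* guard_demo.c: build with `gcc -o guard_demo guard_demo.c` */
	#include <signal.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	static void segv_handler(int sig)
	{
		(void)sig;
		/* Only async-signal-safe calls in a signal handler. */
		write(STDOUT_FILENO,
		      "SIGSEGV: guard page caught the overrun\n", 39);
		_exit(0);
	}

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		char *buf;

		signal(SIGSEGV, segv_handler);

		/* Two pages; the second becomes a hand-made guard page. */
		buf = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		if (mprotect(buf + page, page, PROT_NONE)) {
			perror("mprotect");
			return 1;
		}

		memset(buf, 0, page);	/* in bounds: succeeds */
		buf[page] = 1;		/* one byte past the end: faults */

		puts("no fault: the overrun went undetected");
		return 1;
	}

As read from the diff, the patch achieves the same separation without
a dedicated PROT_NONE page: by reporting each VMA one page larger via
vm_start_gap() and vm_end_gap(), the kernel's unmapped-area search
never places two VMAs back to back, so the page past the end stays
unmapped (MAP_FIXED can still force adjacency).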