From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level:
X-Spam-Status: No, score=-0.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS,
	MAILING_LIST_MULTI,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99])
	by smtp.lore.kernel.org (Postfix) with ESMTP id B9073ECDFB4
	for ; Tue, 17 Jul 2018 23:08:02 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by mail.kernel.org (Postfix) with ESMTP id 7421D20647
	for ; Tue, 17 Jul 2018 23:08:02 +0000 (UTC)
DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7421D20647
Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com
Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-kernel-owner@vger.kernel.org
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1731236AbeGQXmw (ORCPT ); Tue, 17 Jul 2018 19:42:52 -0400
Received: from mga11.intel.com ([192.55.52.93]:56818 "EHLO mga11.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1730063AbeGQXmw (ORCPT ); Tue, 17 Jul 2018 19:42:52 -0400
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
	17 Jul 2018 16:07:59 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.51,367,1526367600"; d="scan'208";a="75608409"
Received: from 2b52.sc.intel.com ([143.183.136.146])
	by orsmga002.jf.intel.com with ESMTP; 17 Jul 2018 16:07:18 -0700
Message-ID: <1531868610.3541.21.camel@intel.com>
Subject: Re: [RFC PATCH v2 16/27] mm: Modify can_follow_write_pte/pmd for shadow stack
From: Yu-cheng Yu
To: Dave Hansen, x86@kernel.org, "H. Peter Anvin", Thomas Gleixner,
	Ingo Molnar, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Cyrill Gorcunov,
	Florian Weimer, "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook,
	Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra,
	"Ravi V. Shankar", Vedvyas Shanbhogue
Date: Tue, 17 Jul 2018 16:03:30 -0700
In-Reply-To: <45a85b01-e005-8cb6-af96-b23ce9b5fca7@linux.intel.com>
References: <20180710222639.8241-1-yu-cheng.yu@intel.com>
	<20180710222639.8241-17-yu-cheng.yu@intel.com>
	<1531328731.15351.3.camel@intel.com>
	<45a85b01-e005-8cb6-af96-b23ce9b5fca7@linux.intel.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.18.5.2-0ubuntu3.2
Mime-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 2018-07-13 at 11:26 -0700, Dave Hansen wrote:
> On 07/11/2018 10:05 AM, Yu-cheng Yu wrote:
> >
> > My understanding is that we don't want to follow a write pte if the page
> > is shared as read-only.  For a SHSTK page, that is (R/O + DIRTY_SW),
> > which means the SHSTK page has not been COW'ed.  Is that right?
>
> Let's look at the code again:
>
> > -static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
> > +static inline bool can_follow_write_pte(pte_t pte, unsigned int flags,
> > +					bool shstk)
> >  {
> > +	bool pte_cowed = shstk ? is_shstk_pte(pte) : pte_dirty(pte);
> > +
> >  	return pte_write(pte) ||
> > -		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
> > +		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_cowed);
> >  }
>
> This is another case where the naming of pte_*() is biting us vs. the
> perversion of the PTE bits.  The lack of comments and explanation in the
> patch is compounding the confusion.
>
> We need to find a way to differentiate "someone can write to this PTE"
> from "the write bit is set in this PTE".
>
> In this particular hunk, we need to make it clear that pte_write() is
> *never* true for shadowstack PTEs.  In other words, shadow stack VMAs
> will (should?) never even *see* a pte_write() PTE.
>
> I think this is a case where you just need to bite the bullet and
> bifurcate can_follow_write_pte().  Just separate the shadowstack and
> non-shadowstack parts.

In case I don't understand the exact issue: what about the following?

diff --git a/mm/gup.c b/mm/gup.c
index fc5f98069f4e..45a0837b27f9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -70,6 +70,12 @@ static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
 		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
 }
 
+static inline bool can_follow_write_shstk_pte(pte_t pte, unsigned int flags)
+{
+	return ((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		is_shstk_pte(pte));
+}
+
 static struct page *follow_page_pte(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmd, unsigned int flags)
 {
@@ -105,9 +111,16 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
-		pte_unmap_unlock(ptep, ptl);
-		return NULL;
+	if (flags & FOLL_WRITE) {
+		if (is_shstk_mapping(vma->vm_flags)) {
+			if (!can_follow_write_shstk_pte(pte, flags)) {
+				pte_unmap_unlock(ptep, ptl);
+				return NULL;
+			}
+		} else if (!can_follow_write_pte(pte, flags)) {
+			pte_unmap_unlock(ptep, ptl);
+			return NULL;
+		}
 	}
 
 	page = vm_normal_page(vma, address, pte);