diff for duplicates of <1535644924.26689.7.camel@intel.com>

diff --git a/a/content_digest b/N1/content_digest
index cdfb3b5..e7244b0 100644
--- a/a/content_digest
+++ b/N1/content_digest
@@ -41,10 +41,7 @@
   " Mike Kravetz <mike.kravetz\@oracle.com>",
   " Nadav Amit <nadav.amit\@gmail.com>",
   " Oleg Nesterov <oleg\@redhat.com>",
-  " Pavel Machek <pavel\@ucw.cz>",
-  " Peter Zijlstra <peterz\@infradead.org>",
-  " ravi.v.shankar\@intel.com",
-  " vedvyas.shanbhogue\@intel.com\0"
+  " Pavel Machek <pave>\0"
 ]
 [
   "\0000:1\0"
@@ -122,4 +119,4 @@
   "Yu-cheng"
 ]
 
-a2ffb47baf57650a1efc374dd385dddf3ab81b947c20c1d8d753b13928da57d6
+6aac0add097d6759b4bb1332a9f8c22c520afd954935a52424a98519cd235405

diff --git a/a/1.txt b/N2/1.txt
index 32296cc..d7ae8e3 100644
--- a/a/1.txt
+++ b/N2/1.txt
@@ -4,16 +4,16 @@ On Thu, 2018-08-30 at 17:49 +0200, Jann Horn wrote:
 > > 
 > > 
 > > When Shadow Stack is enabled, the read-only and PAGE_DIRTY_HW PTE
-> > setting is reserved only for the Shadow Stack.  To track dirty of
+> > setting is reserved only for the Shadow Stack.A A To track dirty of
 > > non-Shadow Stack read-only PTEs, we use PAGE_DIRTY_SW.
 > > 
 > > Update ptep_set_wrprotect() and pmdp_set_wrprotect().
 > > 
 > > Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
 > > ---
-> >  arch/x86/include/asm/pgtable.h | 42
+> > A arch/x86/include/asm/pgtable.h | 42
 > > ++++++++++++++++++++++++++++++++++
-> >  1 file changed, 42 insertions(+)
+> > A 1 file changed, 42 insertions(+)
 > > 
 > > diff --git a/arch/x86/include/asm/pgtable.h
 > > b/arch/x86/include/asm/pgtable.h
@@ -22,38 +22,38 @@ On Thu, 2018-08-30 at 17:49 +0200, Jann Horn wrote:
 > > +++ b/arch/x86/include/asm/pgtable.h
 > > @@ -1203,7 +1203,28 @@ static inline pte_t
 > > ptep_get_and_clear_full(struct mm_struct *mm,
-> >  static inline void ptep_set_wrprotect(struct mm_struct *mm,
-> >                                       unsigned long addr, pte_t
+> > A static inline void ptep_set_wrprotect(struct mm_struct *mm,
+> > A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A unsigned long addr, pte_t
 > > *ptep)
-> >  {
-> > +       pte_t pte;
+> > A {
+> > +A A A A A A A pte_t pte;
 > > +
-> >         clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
-> > +       pte = *ptep;
+> > A A A A A A A A clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
+> > +A A A A A A A pte = *ptep;
 > > +
-> > +       /*
-> > +        * Some processors can start a write, but ending up seeing
-> > +        * a read-only PTE by the time they get to the Dirty bit.
-> > +        * In this case, they will set the Dirty bit, leaving a
-> > +        * read-only, Dirty PTE which looks like a Shadow Stack
+> > +A A A A A A A /*
+> > +A A A A A A A A * Some processors can start a write, but ending up seeing
+> > +A A A A A A A A * a read-only PTE by the time they get to the Dirty bit.
+> > +A A A A A A A A * In this case, they will set the Dirty bit, leaving a
+> > +A A A A A A A A * read-only, Dirty PTE which looks like a Shadow Stack
 > > PTE.
-> > +        *
-> > +        * However, this behavior has been improved and will not
+> > +A A A A A A A A *
+> > +A A A A A A A A * However, this behavior has been improved and will not
 > > occur
-> > +        * on processors supporting Shadow Stacks.  Without this
-> > +        * guarantee, a transition to a non-present PTE and flush
+> > +A A A A A A A A * on processors supporting Shadow Stacks.A A Without this
+> > +A A A A A A A A * guarantee, a transition to a non-present PTE and flush
 > > the
-> > +        * TLB would be needed.
-> > +        *
-> > +        * When change a writable PTE to read-only and if the PTE
+> > +A A A A A A A A * TLB would be needed.
+> > +A A A A A A A A *
+> > +A A A A A A A A * When change a writable PTE to read-only and if the PTE
 > > has
-> > +        * _PAGE_DIRTY_HW set, we move that bit to _PAGE_DIRTY_SW
+> > +A A A A A A A A * _PAGE_DIRTY_HW set, we move that bit to _PAGE_DIRTY_SW
 > > so
-> > +        * that the PTE is not a valid Shadow Stack PTE.
-> > +        */
-> > +       pte = pte_move_flags(pte, _PAGE_DIRTY_HW, _PAGE_DIRTY_SW);
-> > +       set_pte_at(mm, addr, ptep, pte);
-> >  }
+> > +A A A A A A A A * that the PTE is not a valid Shadow Stack PTE.
+> > +A A A A A A A A */
+> > +A A A A A A A pte = pte_move_flags(pte, _PAGE_DIRTY_HW, _PAGE_DIRTY_SW);
+> > +A A A A A A A set_pte_at(mm, addr, ptep, pte);
+> > A }
 > I don't understand why it's okay that you first atomically clear the
 > RW bit, then atomically switch from DIRTY_HW to DIRTY_SW. Doesn't
 > that
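
For readability, here is the hunk quoted in the message above reassembled as plain C, with the quote prefixes, diff markers, and the duplicate copy's mangled non-breaking spaces stripped. This is only a best-effort reconstruction of the text shown in the diff; pte_move_flags(), _PAGE_DIRTY_HW, and _PAGE_DIRTY_SW are names taken from the quoted patch series, not from current mainline.

static inline void ptep_set_wrprotect(struct mm_struct *mm,
				      unsigned long addr, pte_t *ptep)
{
	pte_t pte;

	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
	pte = *ptep;

	/*
	 * Some processors can start a write, but end up seeing a
	 * read-only PTE by the time they get to the Dirty bit.  In
	 * that case they set the Dirty bit, leaving a read-only,
	 * Dirty PTE which looks like a Shadow Stack PTE.
	 *
	 * Per the quoted patch, this behavior does not occur on
	 * processors supporting Shadow Stacks; without that
	 * guarantee, a transition to a non-present PTE and a TLB
	 * flush would be needed.
	 *
	 * When changing a writable PTE to read-only, if the PTE has
	 * _PAGE_DIRTY_HW set, move that bit to _PAGE_DIRTY_SW so the
	 * result is not a valid Shadow Stack PTE.
	 */
	pte = pte_move_flags(pte, _PAGE_DIRTY_HW, _PAGE_DIRTY_SW);
	set_pte_at(mm, addr, ptep, pte);
}
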
diff --git a/a/content_digest b/N2/content_digest
index cdfb3b5..15429d6 100644
--- a/a/content_digest
+++ b/N2/content_digest
@@ -59,16 +59,16 @@
   "> > \n",
   "> > \n",
   "> > When Shadow Stack is enabled, the read-only and PAGE_DIRTY_HW PTE\n",
-  "> > setting is reserved only for the Shadow Stack.\302\240\302\240To track dirty of\n",
+  "> > setting is reserved only for the Shadow Stack.A A To track dirty of\n",
   "> > non-Shadow Stack read-only PTEs, we use PAGE_DIRTY_SW.\n",
   "> > \n",
   "> > Update ptep_set_wrprotect() and pmdp_set_wrprotect().\n",
   "> > \n",
   "> > Signed-off-by: Yu-cheng Yu <yu-cheng.yu\@intel.com>\n",
   "> > ---\n",
-  "> > \302\240arch/x86/include/asm/pgtable.h | 42\n",
+  "> > A arch/x86/include/asm/pgtable.h | 42\n",
   "> > ++++++++++++++++++++++++++++++++++\n",
-  "> > \302\2401 file changed, 42 insertions(+)\n",
+  "> > A 1 file changed, 42 insertions(+)\n",
   "> > \n",
   "> > diff --git a/arch/x86/include/asm/pgtable.h\n",
   "> > b/arch/x86/include/asm/pgtable.h\n",
@@ -77,38 +77,38 @@
   "> > +++ b/arch/x86/include/asm/pgtable.h\n",
   "> > \@\@ -1203,7 +1203,28 \@\@ static inline pte_t\n",
   "> > ptep_get_and_clear_full(struct mm_struct *mm,\n",
-  "> > \302\240static inline void ptep_set_wrprotect(struct mm_struct *mm,\n",
-  "> > \302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240unsigned long addr, pte_t\n",
+  "> > A static inline void ptep_set_wrprotect(struct mm_struct *mm,\n",
+  "> > A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A unsigned long addr, pte_t\n",
   "> > *ptep)\n",
-  "> > \302\240{\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240pte_t pte;\n",
+  "> > A {\n",
+  "> > +A A A A A A A pte_t pte;\n",
   "> > +\n",
-  "> > \302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240pte = *ptep;\n",
+  "> > A A A A A A A A clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);\n",
+  "> > +A A A A A A A pte = *ptep;\n",
   "> > +\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240/*\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* Some processors can start a write, but ending up seeing\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* a read-only PTE by the time they get to the Dirty bit.\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* In this case, they will set the Dirty bit, leaving a\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* read-only, Dirty PTE which looks like a Shadow Stack\n",
+  "> > +A A A A A A A /*\n",
+  "> > +A A A A A A A A * Some processors can start a write, but ending up seeing\n",
+  "> > +A A A A A A A A * a read-only PTE by the time they get to the Dirty bit.\n",
+  "> > +A A A A A A A A * In this case, they will set the Dirty bit, leaving a\n",
+  "> > +A A A A A A A A * read-only, Dirty PTE which looks like a Shadow Stack\n",
   "> > PTE.\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240*\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* However, this behavior has been improved and will not\n",
+  "> > +A A A A A A A A *\n",
+  "> > +A A A A A A A A * However, this behavior has been improved and will not\n",
   "> > occur\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* on processors supporting Shadow Stacks.\302\240\302\240Without this\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* guarantee, a transition to a non-present PTE and flush\n",
+  "> > +A A A A A A A A * on processors supporting Shadow Stacks.A A Without this\n",
+  "> > +A A A A A A A A * guarantee, a transition to a non-present PTE and flush\n",
   "> > the\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* TLB would be needed.\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240*\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* When change a writable PTE to read-only and if the PTE\n",
+  "> > +A A A A A A A A * TLB would be needed.\n",
+  "> > +A A A A A A A A *\n",
+  "> > +A A A A A A A A * When change a writable PTE to read-only and if the PTE\n",
   "> > has\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* _PAGE_DIRTY_HW set, we move that bit to _PAGE_DIRTY_SW\n",
+  "> > +A A A A A A A A * _PAGE_DIRTY_HW set, we move that bit to _PAGE_DIRTY_SW\n",
   "> > so\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240* that the PTE is not a valid Shadow Stack PTE.\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240\302\240*/\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240pte = pte_move_flags(pte, _PAGE_DIRTY_HW, _PAGE_DIRTY_SW);\n",
-  "> > +\302\240\302\240\302\240\302\240\302\240\302\240\302\240set_pte_at(mm, addr, ptep, pte);\n",
-  "> > \302\240}\n",
+  "> > +A A A A A A A A * that the PTE is not a valid Shadow Stack PTE.\n",
+  "> > +A A A A A A A A */\n",
+  "> > +A A A A A A A pte = pte_move_flags(pte, _PAGE_DIRTY_HW, _PAGE_DIRTY_SW);\n",
+  "> > +A A A A A A A set_pte_at(mm, addr, ptep, pte);\n",
+  "> > A }\n",
   "> I don't understand why it's okay that you first atomically clear the\n",
   "> RW bit, then atomically switch from DIRTY_HW to DIRTY_SW. Doesn't\n",
   "> that\n",
@@ -122,4 +122,4 @@
   "Yu-cheng"
 ]
 
-a2ffb47baf57650a1efc374dd385dddf3ab81b947c20c1d8d753b13928da57d6
+be68650434608853bf90d911a0508312c505bc36a0e9ede3d96a9df0b9bc4ed9
