From: Suren Baghdasaryan <surenb@google.com>
To: Liam Howlett <liam.howlett@oracle.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Michel Lespinasse <michel@lespinasse.org>,
	Jerome Glisse <jglisse@google.com>,
	Michal Hocko <mhocko@suse.com>, Vlastimil Babka <vbabka@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Mel Gorman <mgorman@suse.de>, Davidlohr Bueso <dave@stgolabs.net>,
	Matthew Wilcox <willy@infradead.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Laurent Dufour <laurent.dufour@fr.ibm.com>,
	"Paul E . McKenney" <paulmck@kernel.org>,
	Andy Lutomirski <luto@kernel.org>,
	Song Liu <songliubraving@fb.com>, Peter Xu <peterx@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	"dhowells@redhat.com" <dhowells@redhat.com>,
	Hugh Dickins <hughd@google.com>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Kent Overstreet <kent.overstreet@linux.dev>,
	David Rientjes <rientjes@google.com>,
	Axel Rasmussen <axelrasmussen@google.com>,
	Joel Fernandes <joelaf@google.com>,
	Minchan Kim <minchan@google.com>,
	kernel-team <kernel-team@android.com>,
	linux-mm <linux-mm@kvack.org>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"x86@kernel.org" <x86@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH RESEND 06/28] mm: mark VMA as locked whenever vma->vm_flags are modified
Date: Tue, 6 Sep 2022 13:13:05 -0700
Message-ID: <CAJuCfpH5kR2BxEq_bzkAPHwW5dvzTxikCKDD75YK+JaMaHqaJQ@mail.gmail.com>
In-Reply-To: <20220906195949.7nln7y6urs6rfyyd@revolver>

On Tue, Sep 6, 2022 at 1:00 PM Liam Howlett <liam.howlett@oracle.com> wrote:
>
> * Suren Baghdasaryan <surenb@google.com> [220906 15:01]:
> > On Tue, Sep 6, 2022 at 7:27 AM Laurent Dufour <ldufour@linux.ibm.com> wrote:
> > >
> > > On 01/09/2022 19:34, Suren Baghdasaryan wrote:
> > > > VMA flag modifications should be done under VMA lock to prevent concurrent
> > > > page fault handling in that area.
> > > >
> > > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > > > ---
> > > >  fs/proc/task_mmu.c | 1 +
> > > >  fs/userfaultfd.c   | 6 ++++++
> > > >  mm/madvise.c       | 1 +
> > > >  mm/mlock.c         | 2 ++
> > > >  mm/mmap.c          | 1 +
> > > >  mm/mprotect.c      | 1 +
> > > >  6 files changed, 12 insertions(+)
> > >
> > > There are also a few changes needed in driver code, for instance:
> > >
> > > *** arch/x86/kernel/cpu/sgx/driver.c:
> > > sgx_mmap[98]                   vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
> > > *** arch/x86/kernel/cpu/sgx/virt.c:
> > > sgx_vepc_mmap[108]             vma->vm_flags |= VM_PFNMAP | VM_IO | VM_DONTDUMP | VM_DONTCOPY;
> > > *** drivers/dax/device.c:
> > > dax_mmap[311]                  vma->vm_flags |= VM_HUGEPAGE;
> > >
> > > I guess these changes to vm_flags should be protected as well, or
> > > checked one by one.
> >
> > Thanks for noting these! I'll add the necessary locking here and will
> > look for other places I might have missed.
>
> Would an inline set/clear bit function be worthwhile for vm_flags?  If
> so, then a name change to vm_flags may get the compiler to catch any
> missed cases.  There don't seem to be many cases (12 inserts), so maybe
> not.

That would probably simplify flag maintenance in the future, and we
could call vma_mark_locked() directly from the set/clear functions.
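As a rough sketch (the helper names and the __vm_flags rename here are
hypothetical, not something posted in this series), it could look like:

	static inline void vma_set_flags(struct vm_area_struct *vma,
					 unsigned long flags)
	{
		/* Mark the VMA locked before any flags modification. */
		vma_mark_locked(vma);
		vma->__vm_flags |= flags;
	}

	static inline void vma_clear_flags(struct vm_area_struct *vma,
					   unsigned long flags)
	{
		vma_mark_locked(vma);
		vma->__vm_flags &= ~flags;
	}

Renaming the underlying field (e.g. vm_flags -> __vm_flags) would then
turn any open-coded modification we missed into a build error.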

>
> >
> > >
> > > >
> > > > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > > > index 4e0023643f8b..ceffa5c2c650 100644
> > > > --- a/fs/proc/task_mmu.c
> > > > +++ b/fs/proc/task_mmu.c
> > > > @@ -1285,6 +1285,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> > > >                       for (vma = mm->mmap; vma; vma = vma->vm_next) {
> > > >                               if (!(vma->vm_flags & VM_SOFTDIRTY))
> > > >                                       continue;
> > > > +                             vma_mark_locked(vma);
> > > >                               vma->vm_flags &= ~VM_SOFTDIRTY;
> > > >                               vma_set_page_prot(vma);
> > > >                       }
> > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> > > > index 175de70e3adf..fe557b3d1c07 100644
> > > > --- a/fs/userfaultfd.c
> > > > +++ b/fs/userfaultfd.c
> > > > @@ -620,6 +620,7 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
> > > >               mmap_write_lock(mm);
> > > >               for (vma = mm->mmap; vma; vma = vma->vm_next)
> > > >                       if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx) {
> > > > +                             vma_mark_locked(vma);
> > > >                               vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> > > >                               vma->vm_flags &= ~__VM_UFFD_FLAGS;
> > > >                       }
> > > > @@ -653,6 +654,7 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
> > > >
> > > >       octx = vma->vm_userfaultfd_ctx.ctx;
> > > >       if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
> > > > +             vma_mark_locked(vma);
> > > >               vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> > > >               vma->vm_flags &= ~__VM_UFFD_FLAGS;
> > > >               return 0;
> > > > @@ -734,6 +736,7 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
> > > >               atomic_inc(&ctx->mmap_changing);
> > > >       } else {
> > > >               /* Drop uffd context if remap feature not enabled */
> > > > +             vma_mark_locked(vma);
> > > >               vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> > > >               vma->vm_flags &= ~__VM_UFFD_FLAGS;
> > > >       }
> > > > @@ -891,6 +894,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
> > > >                       vma = prev;
> > > >               else
> > > >                       prev = vma;
> > > > +             vma_mark_locked(vma);
> > > >               vma->vm_flags = new_flags;
> > > >               vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> > > >       }
> > > > @@ -1449,6 +1453,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> > > >                * the next vma was merged into the current one and
> > > >                * the current one has not been updated yet.
> > > >                */
> > > > +             vma_mark_locked(vma);
> > > >               vma->vm_flags = new_flags;
> > > >               vma->vm_userfaultfd_ctx.ctx = ctx;
> > > >
> > > > @@ -1630,6 +1635,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
> > > >                * the next vma was merged into the current one and
> > > >                * the current one has not been updated yet.
> > > >                */
> > > > +             vma_mark_locked(vma);
> > > >               vma->vm_flags = new_flags;
> > > >               vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> > > >
> > > > diff --git a/mm/madvise.c b/mm/madvise.c
> > > > index 5f0f0948a50e..a173f0025abd 100644
> > > > --- a/mm/madvise.c
> > > > +++ b/mm/madvise.c
> > > > @@ -181,6 +181,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
> > > >       /*
> > > >        * vm_flags is protected by the mmap_lock held in write mode.
> > > >        */
> > > > +     vma_mark_locked(vma);
> > > >       vma->vm_flags = new_flags;
> > > >       if (!vma->vm_file) {
> > > >               error = replace_anon_vma_name(vma, anon_name);
> > > > diff --git a/mm/mlock.c b/mm/mlock.c
> > > > index b14e929084cc..f62e1a4d05f2 100644
> > > > --- a/mm/mlock.c
> > > > +++ b/mm/mlock.c
> > > > @@ -380,6 +380,7 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
> > > >        */
> > > >       if (newflags & VM_LOCKED)
> > > >               newflags |= VM_IO;
> > > > +     vma_mark_locked(vma);
> > > >       WRITE_ONCE(vma->vm_flags, newflags);
> > > >
> > > >       lru_add_drain();
> > > > @@ -456,6 +457,7 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
> > > >
> > > >       if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) {
> > > >               /* No work to do, and mlocking twice would be wrong */
> > > > +             vma_mark_locked(vma);
> > > >               vma->vm_flags = newflags;
> > > >       } else {
> > > >               mlock_vma_pages_range(vma, start, end, newflags);
> > > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > > index 693e6776be39..f89c9b058105 100644
> > > > --- a/mm/mmap.c
> > > > +++ b/mm/mmap.c
> > > > @@ -1818,6 +1818,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> > > >  out:
> > > >       perf_event_mmap(vma);
> > > >
> > > > +     vma_mark_locked(vma);
> > > >       vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
> > > >       if (vm_flags & VM_LOCKED) {
> > > >               if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
> > >
> > > I guess this doesn't really have an impact, but the call to
> > > vma_mark_locked(vma) could be made only when the vm_flags field is
> > > actually touched.  Something like this:
> > >
> > >         vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
> > >         if (vm_flags & VM_LOCKED) {
> > >                 if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
> > >                                         is_vm_hugetlb_page(vma) ||
> > > -                                       vma == get_gate_vma(current->mm))
> > > +                                       vma == get_gate_vma(current->mm)) {
> > > +                       vma_mark_locked(vma);
> > >                         vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
> > > -               else
> > > +               } else
> > >                         mm->locked_vm += (len >> PAGE_SHIFT);
> > >         }
> > >
> > >
> > > > diff --git a/mm/mprotect.c b/mm/mprotect.c
> > > > index bc6bddd156ca..df47fc21b0e4 100644
> > > > --- a/mm/mprotect.c
> > > > +++ b/mm/mprotect.c
> > > > @@ -621,6 +621,7 @@ mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > > >        * vm_flags and vm_page_prot are protected by the mmap_lock
> > > >        * held in write mode.
> > > >        */
> > > > +     vma_mark_locked(vma);
> > > >       vma->vm_flags = newflags;
> > > >       /*
> > > >        * We want to check manually if we can change individual PTEs writable
> > >
>

Thread overview:
2022-09-01 17:34 [RFC PATCH RESEND 00/28] per-VMA locks proposal Suren Baghdasaryan
2022-09-01 17:34 ` [RFC PATCH RESEND 01/28] mm: introduce CONFIG_PER_VMA_LOCK Suren Baghdasaryan
2022-09-01 17:34 ` [RFC PATCH RESEND 02/28] mm: rcu safe VMA freeing Suren Baghdasaryan
2022-09-01 17:34 ` [RFC PATCH RESEND 03/28] mm: introduce __find_vma to be used without mmap_lock protection Suren Baghdasaryan
2022-09-01 20:22   ` Kent Overstreet
2022-09-01 23:18     ` Suren Baghdasaryan
2022-09-01 17:34 ` [RFC PATCH RESEND 04/28] mm: move mmap_lock assert function definitions Suren Baghdasaryan
2022-09-01 20:24   ` Kent Overstreet
2022-09-01 20:51     ` Liam Howlett
2022-09-01 23:21       ` Suren Baghdasaryan
2022-09-02  6:23     ` Sebastian Andrzej Siewior
2022-09-02 17:46       ` Suren Baghdasaryan
2022-09-01 17:34 ` [RFC PATCH RESEND 05/28] mm: add per-VMA lock and helper functions to control it Suren Baghdasaryan
2022-09-06 13:46   ` Laurent Dufour
2022-09-06 17:24     ` Suren Baghdasaryan
2022-09-01 17:34 ` [RFC PATCH RESEND 06/28] mm: mark VMA as locked whenever vma->vm_flags are modified Suren Baghdasaryan
2022-09-06 14:26   ` Laurent Dufour
2022-09-06 19:00     ` Suren Baghdasaryan
2022-09-06 20:00       ` Liam Howlett
2022-09-06 20:13         ` Suren Baghdasaryan [this message]
2022-09-01 17:34 ` [RFC PATCH RESEND 07/28] kernel/fork: mark VMAs as locked before copying pages during fork Suren Baghdasaryan
2022-09-06 14:37   ` Laurent Dufour
2022-09-08 23:57     ` Suren Baghdasaryan
2022-09-09 13:27       ` Laurent Dufour
2022-09-09 16:29         ` Suren Baghdasaryan
2022-09-01 17:34 ` [RFC PATCH RESEND 08/28] mm/khugepaged: mark VMA as locked while collapsing a hugepage Suren Baghdasaryan
2022-09-06 14:43   ` Laurent Dufour
2022-09-09  0:15     ` Suren Baghdasaryan
2022-09-01 17:34 ` [RFC PATCH RESEND 09/28] mm/mempolicy: mark VMA as locked when changing protection policy Suren Baghdasaryan
2022-09-06 14:47   ` Laurent Dufour
2022-09-09  0:27     ` Suren Baghdasaryan
2022-09-01 17:34 ` [RFC PATCH RESEND 10/28] mm/mmap: mark VMAs as locked in vma_adjust Suren Baghdasaryan
2022-09-06 15:35   ` Laurent Dufour
2022-09-09  0:51     ` Suren Baghdasaryan
2022-09-09 15:52       ` Laurent Dufour
2022-09-01 17:34 ` [RFC PATCH RESEND 11/28] mm/mmap: mark VMAs as locked before merging or splitting them Suren Baghdasaryan
2022-09-06 15:44   ` Laurent Dufour
2022-09-01 17:35 ` [RFC PATCH RESEND 12/28] mm/mremap: mark VMA as locked while remapping it to a new address range Suren Baghdasaryan
2022-09-06 16:09   ` Laurent Dufour
2022-09-01 17:35 ` [RFC PATCH RESEND 13/28] mm: conditionally mark VMA as locked in free_pgtables and unmap_page_range Suren Baghdasaryan
2022-09-09 10:33   ` Laurent Dufour
2022-09-09 16:43     ` Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 14/28] mm: mark VMAs as locked before isolating them Suren Baghdasaryan
2022-09-09 13:35   ` Laurent Dufour
2022-09-09 16:28     ` Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 15/28] mm/mmap: mark adjacent VMAs as locked if they can grow into unmapped area Suren Baghdasaryan
2022-09-09 13:43   ` Laurent Dufour
2022-09-09 16:25     ` Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 16/28] kernel/fork: assert no VMA readers during its destruction Suren Baghdasaryan
2022-09-09 13:56   ` Laurent Dufour
2022-09-09 16:19     ` Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 17/28] mm/mmap: prevent pagefault handler from racing with mmu_notifier registration Suren Baghdasaryan
2022-09-09 14:20   ` Laurent Dufour
2022-09-09 16:12     ` Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 18/28] mm: add FAULT_FLAG_VMA_LOCK flag Suren Baghdasaryan
2022-09-09 14:26   ` Laurent Dufour
2022-09-01 17:35 ` [RFC PATCH RESEND 19/28] mm: disallow do_swap_page to handle page faults under VMA lock Suren Baghdasaryan
2022-09-06 19:39   ` Peter Xu
2022-09-06 20:08     ` Suren Baghdasaryan
2022-09-06 20:22       ` Peter Xu
2022-09-07  0:58         ` Suren Baghdasaryan
2022-09-09 14:26   ` Laurent Dufour
2022-09-01 17:35 ` [RFC PATCH RESEND 20/28] mm: introduce per-VMA lock statistics Suren Baghdasaryan
2022-09-09 14:28   ` Laurent Dufour
2022-09-09 16:11     ` Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 21/28] mm: introduce find_and_lock_anon_vma to be used from arch-specific code Suren Baghdasaryan
2022-09-09 14:38   ` Laurent Dufour
2022-09-09 16:10     ` Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 22/28] x86/mm: try VMA lock-based page fault handling first Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 23/28] x86/mm: define ARCH_SUPPORTS_PER_VMA_LOCK Suren Baghdasaryan
2022-09-01 20:20   ` Kent Overstreet
2022-09-01 23:17     ` Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 24/28] arm64/mm: try VMA lock-based page fault handling first Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 25/28] arm64/mm: define ARCH_SUPPORTS_PER_VMA_LOCK Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 26/28] powerc/mm: try VMA lock-based page fault handling first Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 27/28] powerpc/mm: define ARCH_SUPPORTS_PER_VMA_LOCK Suren Baghdasaryan
2022-09-01 17:35 ` [RFC PATCH RESEND 28/28] kernel/fork: throttle call_rcu() calls in vm_area_free Suren Baghdasaryan
2022-09-09 15:19   ` Laurent Dufour
2022-09-09 16:02     ` Suren Baghdasaryan
2022-09-09 16:14       ` Laurent Dufour
2022-09-01 20:58 ` [RFC PATCH RESEND 00/28] per-VMA locks proposal Kent Overstreet
2022-09-01 23:26   ` Suren Baghdasaryan
2022-09-11  9:35     ` Vlastimil Babka
2022-09-28  2:28       ` Suren Baghdasaryan
2022-09-29 11:18         ` Vlastimil Babka
2022-09-02  7:42 ` Peter Zijlstra
2022-09-02 14:45   ` Suren Baghdasaryan
2022-09-05 12:32 ` Michal Hocko
2022-09-05 18:32   ` Suren Baghdasaryan
2022-09-05 20:35     ` Kent Overstreet
2022-09-06 15:46       ` Suren Baghdasaryan
