From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>
To: Lokesh Gidra <lokeshgidra@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>,
	akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	selinux@vger.kernel.org, kernel-team@android.com,
	aarcange@redhat.com, peterx@redhat.com, david@redhat.com,
	axelrasmussen@google.com, bgeffon@google.com,
	willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
	ngeoffray@google.com, timmurray@google.com, rppt@kernel.org
Subject: Re: [PATCH v2 3/3] userfaultfd: use per-vma locks in userfaultfd operations
Date: Mon, 29 Jan 2024 21:58:03 -0500	[thread overview]
Message-ID: <20240130025803.2go3xekza5qubxgz@revolver> (raw)
In-Reply-To: <CA+EESO5r+b7QPYM5po--rxQBa9EPi4x1EZ96rEzso288dbpuow@mail.gmail.com>

* Lokesh Gidra <lokeshgidra@google.com> [240129 19:28]:
> On Mon, Jan 29, 2024 at 12:53 PM Suren Baghdasaryan <surenb@google.com> wrote:
> >

...

> 
> Thanks for the clarification. So vma_lookup() returns the vma for any
> address within [vma->vm_start, vma->vm_end)?

It returns the vma that contains the address passed; if there isn't
one, you will get NULL.  This is why the range check is not needed.

find_vma() walks to the address passed and, if no vma contains that
address, returns the vma with the next higher start address (or,
rarely, NULL if it runs off the end of the address space).
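
To make this concrete, a quick illustrative sketch (not the kernel
sources, just the semantics described above):

	struct vm_area_struct *vma;

	/* NULL unless addr is inside [vma->vm_start, vma->vm_end) */
	vma = vma_lookup(mm, addr);

	/* may instead return the next vma starting above addr */
	vma = find_vma(mm, addr);
	if (vma && addr < vma->vm_start)
		;	/* addr is in a gap; vma does not contain it */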

> > > If you want to search upwards from dst_start for a VMA then you should
> > > move the range check below into this brace.
> > >
> > > > +     }
> > > > +
> > > >       /*
> > > >        * Make sure that the dst range is both valid and fully within a
> > > >        * single existing vma.
> > > >        */
> > > > -     struct vm_area_struct *dst_vma;
> > > > -
> > > > -     dst_vma = find_vma(dst_mm, dst_start);
> > > >       if (!range_in_vma(dst_vma, dst_start, dst_start + len))
> > > > -             return NULL;
> > > > +             goto unpin;
> > > >
> > > >       /*
> > > >        * Check the vma is registered in uffd, this is required to
> > > > @@ -40,9 +59,13 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
> > > >        * time.
> > > >        */
> > > >       if (!dst_vma->vm_userfaultfd_ctx.ctx)
> > > > -             return NULL;
> > > > +             goto unpin;
> > > >
> > > >       return dst_vma;
> > > > +
> > > > +unpin:
> > > > +     unpin_vma(dst_mm, dst_vma, mmap_locked);
> > > > +     return NULL;
> > > >  }
> > > >
> > > >  /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
> > > > @@ -350,7 +373,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
> > > >  #ifdef CONFIG_HUGETLB_PAGE
> > > >  /*
> > > >   * mfill_atomic processing for HUGETLB vmas.  Note that this routine is
> > > > - * called with mmap_lock held, it will release mmap_lock before returning.
> > > > + * called with either vma-lock or mmap_lock held, it will release the lock
> > > > + * before returning.
> > > >   */
> > > >  static __always_inline ssize_t mfill_atomic_hugetlb(
> > > >                                             struct userfaultfd_ctx *ctx,
> > > > @@ -358,7 +382,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> > > >                                             unsigned long dst_start,
> > > >                                             unsigned long src_start,
> > > >                                             unsigned long len,
> > > > -                                           uffd_flags_t flags)
> > > > +                                           uffd_flags_t flags,
> > > > +                                           bool *mmap_locked)
> > > >  {
> > > >       struct mm_struct *dst_mm = dst_vma->vm_mm;
> > > >       int vm_shared = dst_vma->vm_flags & VM_SHARED;
> > > > @@ -380,7 +405,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> > > >        */
> > > >       if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
> > > >               up_read(&ctx->map_changing_lock);
> > > > -             mmap_read_unlock(dst_mm);
> > > > +             unpin_vma(dst_mm, dst_vma, mmap_locked);
> > > >               return -EINVAL;
> > > >       }
> > > >
> > > > @@ -404,12 +429,25 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> > > >        */
> > > >       if (!dst_vma) {
> > > >               err = -ENOENT;
> > > > -             dst_vma = find_dst_vma(dst_mm, dst_start, len);
> > > > -             if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
> > > > -                     goto out_unlock;
> > > > +             dst_vma = find_and_pin_dst_vma(dst_mm, dst_start,
> > > > +                                            len, mmap_locked);
> > > > +             if (!dst_vma)
> > > > +                     goto out;
> > > > +             if (!is_vm_hugetlb_page(dst_vma))
> > > > +                     goto out_unlock_vma;
> > > >
> > > >               err = -EINVAL;
> > > >               if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
> > > > +                     goto out_unlock_vma;
> > > > +
> > > > +             /*
> > > > +              * If memory mappings are changing because of non-cooperative
> > > > +              * operation (e.g. mremap) running in parallel, bail out and
> > > > +              * request the user to retry later
> > > > +              */
> > > > +             down_read(&ctx->map_changing_lock);
> > > > +             err = -EAGAIN;
> > > > +             if (atomic_read(&ctx->mmap_changing))
> > > >                       goto out_unlock;
> > > >
> > > >               vm_shared = dst_vma->vm_flags & VM_SHARED;
> > > > @@ -465,7 +503,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> > > >
> > > >               if (unlikely(err == -ENOENT)) {
> > > >                       up_read(&ctx->map_changing_lock);
> > > > -                     mmap_read_unlock(dst_mm);
> > > > +                     unpin_vma(dst_mm, dst_vma, mmap_locked);
> > > >                       BUG_ON(!folio);
> > > >
> > > >                       err = copy_folio_from_user(folio,
> > > > @@ -474,17 +512,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> > > >                               err = -EFAULT;
> > > >                               goto out;
> > > >                       }
> > > > -                     mmap_read_lock(dst_mm);
> > > > -                     down_read(&ctx->map_changing_lock);
> > > > -                     /*
> > > > -                      * If memory mappings are changing because of non-cooperative
> > > > -                      * operation (e.g. mremap) running in parallel, bail out and
> > > > -                      * request the user to retry later
> > > > -                      */
> > > > -                     if (atomic_read(&ctx->mmap_changing)) {
> > > > -                             err = -EAGAIN;
> > > > -                             break;
> > > > -                     }
> > >
> > > ... Okay, this is where things get confusing.
> > >
> > > How about this: Don't do this locking/boolean dance.
> > >
> > > Instead, do something like this:
> > > In mm/memory.c, below lock_vma_under_rcu(), but something like this
> > >
> > > struct vm_area_struct *lock_vma(struct mm_struct *mm,
> > >         unsigned long addr)    /* or some better name.. */
> > > {
> > >         struct vm_area_struct *vma;
> > >
> > >         vma = lock_vma_under_rcu(mm, addr);
> > >
> > >         if (vma)
> > >                 return vma;
> > >
> > >         mmap_read_lock(mm);
> > >         vma = vma_lookup(mm, addr);
> > >         if (vma)
> > >                 vma_start_read(vma); /* Won't fail */
> >
> > Please don't assume vma_start_read() won't fail even when you have
> > mmap_read_lock(). See the comment in vma_start_read() about the
> > possibility of an overflow producing false negatives.
> >
> > >
> > >         mmap_read_unlock(mm);
> > >         return vma;
> > > }
> > >
> > > Now, if there is a vma, we know it is vma-locked.  The vma won't go
> > > away - you have it locked.  The mmap lock is held for even less time
> > > in your worst case, and the code gets easier to follow.
> 
> Your suggestion is definitely simpler and easier to follow, but due to
> the overflow situation that Suren pointed out, I would still need to
> keep the locking/boolean dance, no? IIUC, even if I were to return
> EAGAIN to userspace, there is no guarantee that subsequent ioctls on
> the same vma would succeed, because the same overflow persists until
> someone acquires and releases mmap_lock in write mode.
> Also, sometimes knowing whether we managed to lock the vma is not
> enough. For instance, lock_vma_under_rcu() checks whether anon_vma
> exists (for an anonymous vma) and bails out if it doesn't.
> So it seems to me that we have to provide some fallback in
> userfaultfd operations which executes with mmap_lock in read mode.

Fair enough.  What if we didn't use the sequence number and just locked
the vma directly?

/* This will wait on the vma lock, so once we return it's locked */
void vma_acquire_read_lock(struct vm_area_struct *vma)
{
	mmap_assert_locked(vma->vm_mm);
	down_read(&vma->vm_lock->lock);
}

struct vm_area_struct *lock_vma(struct mm_struct *mm,
		unsigned long addr)	/* or some better name.. */
{
	struct vm_area_struct *vma;

	vma = lock_vma_under_rcu(mm, addr);
	if (vma)
		return vma;

	mmap_read_lock(mm);
	/*
	 * The mm sequence cannot change: there are no mm writers here.
	 * find_mergeable_anon_vma() is only a concern in the page fault
	 * path.  start/end won't change under the mmap_lock, and the vma
	 * won't become detached while we hold the mmap_lock in read mode.
	 * We are now sure no writes will change the VMA, so let's make
	 * sure no other context is isolating it.
	 */
	vma = vma_lookup(mm, addr);
	if (vma)
		vma_acquire_read_lock(vma);

	mmap_read_unlock(mm);
	return vma;
}

I'm betting that avoiding the mmap_lock most of the time is a win, and
that holding it just to lock the vma will collide only rarely - and
those collisions will be short-lived.

This would allow us to simplify your code.
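
Roughly, callers could then collapse to something like the following
(a sketch only, assuming the lock_vma() above; the error handling is
hypothetical):

	dst_vma = lock_vma(mm, dst_start);
	if (!dst_vma)
		return -ENOENT;

	/* ... do the userfaultfd operation on dst_vma ... */

	vma_end_read(dst_vma);

Either way the vma comes back locked, so the mmap_locked boolean and
the unpin_vma() dance can go away.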

> > >
> > > Once you are done with the vma do a vma_end_read(vma).  Don't forget to
> > > do this!
> > >
> > > Now, the comment above such a function should state that the vma needs
> > > a matching vma_end_read(vma), or a missed unlock could go undetected.
> > > It might even be worth adding an unlock_vma() counterpart to
> > > vma_end_read(vma).
> >
> > Locking a VMA while holding mmap_read_lock is an interesting usage
> > pattern I haven't seen yet. I think this should work quite well!
> >
> > >
> > >
> > > >
> > > >                       dst_vma = NULL;
> > > >                       goto retry;
> > > > @@ -505,7 +532,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> > > >
> > > >  out_unlock:
> > > >       up_read(&ctx->map_changing_lock);
> > > > -     mmap_read_unlock(dst_mm);
> > > > +out_unlock_vma:
> > > > +     unpin_vma(dst_mm, dst_vma, mmap_locked);
> > > >  out:
> > > >       if (folio)
> > > >               folio_put(folio);
> > > > @@ -521,7 +549,8 @@ extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
> > > >                                   unsigned long dst_start,
> > > >                                   unsigned long src_start,
> > > >                                   unsigned long len,
> > > > -                                 uffd_flags_t flags);
> > > > +                                 uffd_flags_t flags,
> > > > +                                 bool *mmap_locked);
> > >
> > > Just a thought, tabbing in twice for each argument would make this more
> > > compact.
> > >
> > >
> > > >  #endif /* CONFIG_HUGETLB_PAGE */
> > > >
> > > >  static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
> > > > @@ -581,6 +610,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > > >       unsigned long src_addr, dst_addr;
> > > >       long copied;
> > > >       struct folio *folio;
> > > > +     bool mmap_locked = false;
> > > >
> > > >       /*
> > > >        * Sanitize the command parameters:
> > > > @@ -597,7 +627,14 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > > >       copied = 0;
> > > >       folio = NULL;
> > > >  retry:
> > > > -     mmap_read_lock(dst_mm);
> > > > +     /*
> > > > +      * Make sure the vma is not shared, that the dst range is
> > > > +      * both valid and fully within a single existing vma.
> > > > +      */
> > > > +     err = -ENOENT;
> > > > +     dst_vma = find_and_pin_dst_vma(dst_mm, dst_start, len, &mmap_locked);
> > > > +     if (!dst_vma)
> > > > +             goto out;
> > > >
> > > >       /*
> > > >        * If memory mappings are changing because of non-cooperative
> > > > @@ -609,15 +646,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > > >       if (atomic_read(&ctx->mmap_changing))
> > > >               goto out_unlock;
> > > >
> > > > -     /*
> > > > -      * Make sure the vma is not shared, that the dst range is
> > > > -      * both valid and fully within a single existing vma.
> > > > -      */
> > > > -     err = -ENOENT;
> > > > -     dst_vma = find_dst_vma(dst_mm, dst_start, len);
> > > > -     if (!dst_vma)
> > > > -             goto out_unlock;
> > > > -
> > > >       err = -EINVAL;
> > > >       /*
> > > >        * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
> > > > @@ -638,8 +666,8 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > > >        * If this is a HUGETLB vma, pass off to appropriate routine
> > > >        */
> > > >       if (is_vm_hugetlb_page(dst_vma))
> > > > -             return  mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
> > > > -                                          src_start, len, flags);
> > > > +             return  mfill_atomic_hugetlb(ctx, dst_vma, dst_start, src_start,
> > > > +                                          len, flags, &mmap_locked);
> > > >
> > > >       if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
> > > >               goto out_unlock;
> > > > @@ -699,7 +727,8 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > > >                       void *kaddr;
> > > >
> > > >                       up_read(&ctx->map_changing_lock);
> > > > -                     mmap_read_unlock(dst_mm);
> > > > +                     unpin_vma(dst_mm, dst_vma, &mmap_locked);
> > > > +
> > > >                       BUG_ON(!folio);
> > > >
> > > >                       kaddr = kmap_local_folio(folio, 0);
> > > > @@ -730,7 +759,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > > >
> > > >  out_unlock:
> > > >       up_read(&ctx->map_changing_lock);
> > > > -     mmap_read_unlock(dst_mm);
> > > > +     unpin_vma(dst_mm, dst_vma, &mmap_locked);
> > > >  out:
> > > >       if (folio)
> > > >               folio_put(folio);
> > > > @@ -1285,8 +1314,6 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
> > > >   * @len: length of the virtual memory range
> > > >   * @mode: flags from uffdio_move.mode
> > > >   *
> > > > - * Must be called with mmap_lock held for read.
> > > > - *
> > > >   * move_pages() remaps arbitrary anonymous pages atomically in zero
> > > >   * copy. It only works on non shared anonymous pages because those can
> > > >   * be relocated without generating non linear anon_vmas in the rmap
> > > > @@ -1353,15 +1380,16 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
> > > >   * could be obtained. This is the only additional complexity added to
> > > >   * the rmap code to provide this anonymous page remapping functionality.
> > > >   */
> > > > -ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
> > > > -                unsigned long dst_start, unsigned long src_start,
> > > > -                unsigned long len, __u64 mode)
> > > > +ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
> > > > +                unsigned long src_start, unsigned long len, __u64 mode)
> > > >  {
> > > > +     struct mm_struct *mm = ctx->mm;
> > > >       struct vm_area_struct *src_vma, *dst_vma;
> > > >       unsigned long src_addr, dst_addr;
> > > >       pmd_t *src_pmd, *dst_pmd;
> > > >       long err = -EINVAL;
> > > >       ssize_t moved = 0;
> > > > +     bool mmap_locked = false;
> > > >
> > > >       /* Sanitize the command parameters. */
> > > >       if (WARN_ON_ONCE(src_start & ~PAGE_MASK) ||
> > > > @@ -1374,28 +1402,52 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
> > > >           WARN_ON_ONCE(dst_start + len <= dst_start))
> > > >               goto out;
> > >
> > > Ah, is this safe for rmap?  I think you need to leave this read lock.
> > >
> I didn't fully understand you here.

Sorry, I'm confused about how your locking scheme keeps rmap from
trying to use the VMA via the atomic increment part.

> > > >
> > > > +     dst_vma = NULL;
> > > > +     src_vma = lock_vma_under_rcu(mm, src_start);
> > > > +     if (src_vma) {
> > > > +             dst_vma = lock_vma_under_rcu(mm, dst_start);
> > > > +             if (!dst_vma)
> > > > +                     vma_end_read(src_vma);
> > > > +     }
> > > > +
> > > > +     /* If we failed to lock both VMAs, fall back to mmap_lock */
> > > > +     if (!dst_vma) {
> > > > +             mmap_read_lock(mm);
> > > > +             mmap_locked = true;
> > > > +             src_vma = find_vma(mm, src_start);
> > > > +             if (!src_vma)
> > > > +                     goto out_unlock_mmap;
> > > > +             dst_vma = find_vma(mm, dst_start);
> > >
> > > Again, there is a difference in how find_vma() and lock_vma_under_rcu()
> > > work.
> 
> Sure, I'll use vma_lookup() instead of find_vma().

Be sure it fits with what you are doing; I'm not entirely sure it's
right to switch.  If it is not right, then I don't think you can use
lock_vma_under_rcu() - but we can work around that too.
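
If strict containment is what move_pages() wants here, the fallback
might look something like this (a hypothetical sketch, not tested
against the rest of the series):

	mmap_read_lock(mm);
	mmap_locked = true;
	src_vma = vma_lookup(mm, src_start);	/* must contain src_start */
	if (!src_vma)
		goto out_unlock_mmap;
	dst_vma = vma_lookup(mm, dst_start);	/* must contain dst_start */
	if (!dst_vma)
		goto out_unlock_mmap;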

> > >
> > > > +             if (!dst_vma)
> > > > +                     goto out_unlock_mmap;
> > > > +     }
> > > > +
> > > > +     /* Re-check after taking map_changing_lock */
> > > > +     down_read(&ctx->map_changing_lock);
> > > > +     if (unlikely(atomic_read(&ctx->mmap_changing))) {
> > > > +             err = -EAGAIN;
> > > > +             goto out_unlock;
> > > > +     }
> > > >       /*
> > > >        * Make sure the vma is not shared, that the src and dst remap
> > > >        * ranges are both valid and fully within a single existing
> > > >        * vma.
> > > >        */
> > > > -     src_vma = find_vma(mm, src_start);
> > > > -     if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> > > > -             goto out;
> > > > +     if (src_vma->vm_flags & VM_SHARED)
> > > > +             goto out_unlock;
> > > >       if (src_start < src_vma->vm_start ||
> > > >           src_start + len > src_vma->vm_end)
> > > > -             goto out;
> > > > +             goto out_unlock;
> > > >
> > > > -     dst_vma = find_vma(mm, dst_start);
> > > > -     if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> > > > -             goto out;
> > > > +     if (dst_vma->vm_flags & VM_SHARED)
> > > > +             goto out_unlock;
> > > >       if (dst_start < dst_vma->vm_start ||
> > > >           dst_start + len > dst_vma->vm_end)
> > > > -             goto out;
> > > > +             goto out_unlock;
> > > >
> > > >       err = validate_move_areas(ctx, src_vma, dst_vma);
> > > >       if (err)
> > > > -             goto out;
> > > > +             goto out_unlock;
> > > >
> > > >       for (src_addr = src_start, dst_addr = dst_start;
> > > >            src_addr < src_start + len;) {
> > > > @@ -1512,6 +1564,15 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
> > > >               moved += step_size;
> > > >       }
> > > >
> > > > +out_unlock:
> > > > +     up_read(&ctx->map_changing_lock);
> > > > +out_unlock_mmap:
> > > > +     if (mmap_locked)
> > > > +             mmap_read_unlock(mm);
> > > > +     else {
> > > > +             vma_end_read(dst_vma);
> > > > +             vma_end_read(src_vma);
> > > > +     }
> > > >  out:
> > > >       VM_WARN_ON(moved < 0);
> > > >       VM_WARN_ON(err > 0);
> > > > --
> > > > 2.43.0.429.g432eaa2c6b-goog
> > > >
> > > >
