All of lore.kernel.org
 help / color / mirror / Atom feed
From: Kalesh Singh <kaleshsingh@google.com>
To: Yosry Ahmed <yosryahmed@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Yang Shi <shy828301@gmail.com>,
	 lsf-pc@lists.linux-foundation.org, Linux-MM <linux-mm@kvack.org>,
	 Michal Hocko <mhocko@kernel.org>,
	Shakeel Butt <shakeelb@google.com>,
	 David Rientjes <rientjes@google.com>,
	Hugh Dickins <hughd@google.com>,
	 Seth Jennings <sjenning@redhat.com>,
	Dan Streetman <ddstreet@ieee.org>,
	 Vitaly Wool <vitaly.wool@konsulko.com>,
	Peter Xu <peterx@redhat.com>,  Minchan Kim <minchan@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	 Nhat Pham <nphamcs@gmail.com>,
	Akilesh Kailash <akailash@google.com>
Subject: Re: [LSF/MM/BPF TOPIC] Swap Abstraction / Native Zswap
Date: Mon, 27 Feb 2023 20:29:00 -0800	[thread overview]
Message-ID: <CAC_TJve7e=sz4uPDuRvauj1hr=evOUWbSoz91wniSQYUbv0ajA@mail.gmail.com> (raw)
In-Reply-To: <CAJD7tka3MgUpyG4zfcKjtA-P=Wt0Qog=AdJ5zPx0pGwN2a8dbQ@mail.gmail.com>

On Wed, Feb 22, 2023 at 2:47 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Wed, Feb 22, 2023 at 8:57 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > Hello,
> >
> > thanks for proposing this, Yosry. I'm very interested in this
> > work. Unfortunately, I won't be able to attend LSFMMBPF myself this
> > time around due to a scheduling conflict :(
>
> Ugh, would have been great to have you, I guess there might be a
> remote option, or we will end up discussing on the mailing list
> eventually anyway.
>
> >
> > On Tue, Feb 21, 2023 at 03:38:57PM -0800, Yosry Ahmed wrote:
> > > On Tue, Feb 21, 2023 at 3:34 PM Yang Shi <shy828301@gmail.com> wrote:
> > > >
> > > > On Tue, Feb 21, 2023 at 11:46 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> > > > >
> > > > > On Tue, Feb 21, 2023 at 11:26 AM Yang Shi <shy828301@gmail.com> wrote:
> > > > > >
> > > > > > On Tue, Feb 21, 2023 at 10:56 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> > > > > > >
> > > > > > > On Tue, Feb 21, 2023 at 10:40 AM Yang Shi <shy828301@gmail.com> wrote:
> > > > > > > >
> > > > > > > > Hi Yosry,
> > > > > > > >
> > > > > > > > Thanks for proposing this topic. I was thinking about this before but
> > > > > > > > I didn't make too much progress due to some other distractions, and I
> > > > > > > > got a couple of follow up questions about your design. Please see the
> > > > > > > > inline comments below.
> > > > > > >
> > > > > > > Great to see interested folks, thanks!
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > On Sat, Feb 18, 2023 at 2:39 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> > > > > > > > >
> > > > > > > > > Hello everyone,
> > > > > > > > >
> > > > > > > > > I would like to propose a topic for the upcoming LSF/MM/BPF in May
> > > > > > > > > 2023 about swap & zswap (hope I am not too late).
> > > > > > > > >
> > > > > > > > > ==================== Intro ====================
> > > > > > > > > Currently, using zswap is dependent on swapfiles in an unnecessary
> > > > > > > > > way. To use zswap, you need a swapfile configured (even if the space
> > > > > > > > > will not be used) and zswap is restricted by its size. When pages
> > > > > > > > > reside in zswap, the corresponding swap entry in the swapfile cannot
> > > > > > > > > be used, and is essentially wasted. We also go through unnecessary
> > > > > > > > > code paths when using zswap, such as finding and allocating a swap
> > > > > > > > > entry on the swapout path, or readahead in the swapin path. I am
> > > > > > > > > proposing a swapping abstraction layer that would allow us to remove
> > > > > > > > > zswap's dependency on swapfiles. This can be done by introducing a
> > > > > > > > > data structure between the actual swapping implementation (swapfiles,
> > > > > > > > > zswap) and the rest of the MM code.
> > > > > > > > >
> > > > > > > > > ==================== Objective ====================
> > > > > > > > > Enabling the use of zswap without a backing swapfile, which makes
> > > > > > > > > zswap useful for a wider variety of use cases. Also, when zswap is
> > > > > > > > > used with a swapfile, the pages in zswap do not use up space in the
> > > > > > > > > swapfile, so the overall swapping capacity increases.
> > > > > > > > >
> > > > > > > > > ==================== Idea ====================
> > > > > > > > > Introduce a data structure, which I currently call a swap_desc, as an
> > > > > > > > > abstraction layer between swapping implementation and the rest of MM
> > > > > > > > > code. Page tables & page caches would store a swap id (encoded as a
> > > > > > > > > swp_entry_t) instead of directly storing the swap entry associated
> > > > > > > > > with the swapfile. This swap id maps to a struct swap_desc, which acts
> > > > > > > > > as our abstraction layer. All MM code not concerned with swapping
> > > > > > > > > details would operate in terms of swap descs. The swap_desc can point
> > > > > > > > > to either a normal swap entry (associated with a swapfile) or a zswap
> > > > > > > > > entry. It can also include all non-backend specific operations, such
> > > > > > > > > as the swapcache (which would be a simple pointer in swap_desc), swap
> > > > > > > > > counting, etc. It creates a clear, nice abstraction layer between MM
> > > > > > > > > code and the actual swapping implementation.
> > > > > > > >
> > > > > > > > How will the swap_desc be allocated? Dynamically or preallocated? Is
> > > > > > > > it 1:1 mapped to the swap slots on swap devices (whatever it is
> > > > > > > > backed, for example, zswap, swap partition, swapfile, etc)?
> > > > > > >
> > > > > > > I imagine swap_desc's would be dynamically allocated when we need to
> > > > > > > swap something out. When allocated, a swap_desc would either point to
> > > > > > > a zswap_entry (if available), or a swap slot otherwise. In this case,
> > > > > > > it would be 1:1 mapped to swapped out pages, not the swap slots on
> > > > > > > devices.
> > > > > >
> > > > > > It makes sense to be 1:1 mapped to swapped out pages if the swapfile
> > > > > > is used as the back of zswap.
> > > > > >
> > > > > > >
> > > > > > > I know that it might not be ideal to make allocations on the reclaim
> > > > > > > path (although it would be a small-ish slab allocation so we might be
> > > > > > > able to get away with it), but otherwise we would have statically
> > > > > > > allocated swap_desc's for all swap slots on a swap device, even unused
> > > > > > > ones, which I imagine is too expensive. Also for things like zswap, it
> > > > > > > doesn't really make sense to preallocate at all.
> > > > > >
> > > > > > Yeah, it is not perfect to allocate memory in the reclamation path. We
> > > > > > do have such cases, but the fewer the better IMHO.
> > > > >
> > > > > Yeah. Perhaps we can preallocate a pool of swap_desc's on top of the
> > > > > slab cache, idk if that makes sense, or if there is a way to tell slab
> > > > > to proactively refill a cache.
> > > > >
> > > > > I am open to suggestions here. I don't think we should/can preallocate
> > > > > the swap_desc's, and we cannot completely eliminate the allocations in
> > > > > the reclaim path. We can only try to minimize them through caching,
> > > > > etc. Right?
> > > >
> > > > Yeah, reallocation should not work. But I'm not sure whether caching
> > > > works well for this case or not either. I'm supposed that you were
> > > > thinking about something similar with pcp. When the available number
> > > > of elements is lower than a threshold, refill the cache. It should
> > > > work well with moderate memory pressure. But I'm not sure how it would
> > > > behave with severe memory pressure, particularly when  anonymous
> > > > memory dominated the memory usage. Or maybe dynamic allocation works
> > > > well, we are just over-engineered.
> > >
> > > Yeah it would be interesting to look into whether the swap_desc
> > > allocation will be a bottleneck. Definitely something to look out for.
> > > I share your thoughts about wanting to do something about it but also
> > > not wanting to over-engineer it.
> >
> > I'm not too concerned by this. It's a PF_MEMALLOC allocation, meaning
> > it's not subject to watermarks. And the swapped page is freed right
> > afterwards. As long as the compression delta exceeds the size of
> > swap_desc, the process is a net reduction in allocated memory. For
> > regular swap, the only requirement is that swap_desc < page_size() :-)
> >
> > To put this into perspective, the zswap backends allocate backing
> > pages on-demand during reclaim. zsmalloc also kmallocs metadata in
> > that path. We haven't had any issues with this in production, even
> > under fairly severe memory pressure scenarios.
>
> Right. The only problem would be for pages that do not compress well
> in zswap, in which case we might not end up freeing memory. As you
> said, this is already happening today with zswap tho.
>
> >
> > > > > > > > > ==================== Benefits ====================
> > > > > > > > > This work enables using zswap without a backing swapfile and increases
> > > > > > > > > the swap capacity when zswap is used with a swapfile. It also creates
> > > > > > > > > a separation that allows us to skip code paths that don't make sense
> > > > > > > > > in the zswap path (e.g. readahead). We get to drop zswap's rbtree
> > > > > > > > > which might result in better performance (less lookups, less lock
> > > > > > > > > contention).
> > > > > > > > >
> > > > > > > > > The abstraction layer also opens the door for multiple cleanups (e.g.
> > > > > > > > > removing swapper address spaces, removing swap count continuation
> > > > > > > > > code, etc). Another nice cleanup that this work enables would be
> > > > > > > > > separating the overloaded swp_entry_t into two distinct types: one for
> > > > > > > > > things that are stored in page tables / caches, and for actual swap
> > > > > > > > > entries. In the future, we can potentially further optimize how we use
> > > > > > > > > the bits in the page tables instead of sticking everything into the
> > > > > > > > > current type/offset format.
> > > > > > > > >
> > > > > > > > > Another potential win here can be swapoff, which can be more practical
> > > > > > > > > by directly scanning all swap_desc's instead of going through page
> > > > > > > > > tables and shmem page caches.
> > > > > > > > >
> > > > > > > > > Overall zswap becomes more accessible and available to a wider range
> > > > > > > > > of use cases.
> > > > > > > >
> > > > > > > > How will you handle zswap writeback? Zswap may writeback to the backed
> > > > > > > > swap device IIUC. Assuming you have both zswap and swapfile, they are
> > > > > > > > separate devices with this design, right? If so, is the swapfile still
> > > > > > > > the writeback target of zswap? And if it is the writeback target, what
> > > > > > > > if swapfile is full?
> > > > > > >
> > > > > > > When we try to writeback from zswap, we try to allocate a swap slot in
> > > > > > > the swapfile, and switch the swap_desc to point to that instead. The
> > > > > > > process would be transparent to the rest of MM (page tables, page
> > > > > > > cache, etc). If the swapfile is full, then there's really nothing we
> > > > > > > can do, reclaim fails and we start OOMing. I imagine this is the same
> > > > > > > behavior as today when swap is full, the difference would be that we
> > > > > > > have to fill both zswap AND the swapfile to get to the OOMing point,
> > > > > > > so an overall increased swapping capacity.
> > > > > >
> > > > > > When zswap is full, but swapfile is not yet, will the swap try to
> > > > > > writeback zswap to swapfile to make more room for zswap or just swap
> > > > > > out to swapfile directly?
> > > > > >
> > > > >
> > > > > The current behavior is that we swap to swapfile directly in this
> > > > > case, which is far from ideal as we break LRU ordering by skipping
> > > > > zswap. I believe this should be addressed, but not as part of this
> > > > > effort. The work to make zswap respect the LRU ordering by writing
> > > > > back from zswap to make room can be done orthogonal to this effort. I
> > > > > believe Johannes was looking into this at some point.
> >
> > Actually, zswap already does LRU writeback when the pool is full. Nhat
> > Pham (CCd) recently upstreamed the LRU implementation for zsmalloc, so
> > as of today all backends support this.
> >
> > There are still a few quirks in zswap that can cause rejections which
> > bypass the LRU that need fixing. But for the most part LRU writeback
> > to the backing file is the default behavior.
>
> Right, I was specifically talking about this case. When zswap is full
> it rejects incoming pages and they go directly to the swapfile, but we
> also kickoff writeback, so this only happens until we do some LRU
> writeback. I guess I should have been more clear here. Thanks for
> clarifying and correcting.
>
> >
> > > > Other than breaking LRU ordering, I'm also concerned about the
> > > > potential deteriorating performance when writing/reading from swapfile
> > > > when zswap is full. The zswap->swapfile order should be able to
> > > > maintain a consistent performance for userspace.
> > >
> > > Right. This happens today anyway AFAICT, when zswap is full we just
> > > fallback to writing to swapfile, so this would not be a behavior
> > > change. I agree it should be addressed anyway.
> > >
> > > >
> > > > But anyway I don't have the data from real life workload to back the
> > > > above points. If you or Johannes could share some real data, that
> > > > would be very helpful to make the decisions.
> > >
> > > I actually don't, since we mostly run zswap without a backing
> > > swapfile. Perhaps Johannes might be able to have some data on this (or
> > > anyone using zswap with a backing swapfile).
> >
> > Due to LRU writeback, the latency increase when zswap spills its
> > coldest entries into backing swap is fairly linear, as you may
> > expect. We have some limited production data on this from the
> > webservers.
> >
> > The biggest challenge in this space is properly sizing the zswap pool,
> > such that it's big enough to hold the warm set that the workload is
> > most latency-sensitive too, yet small enough such that the cold pages
> > get spilled to backing swap. Nhat is working on improving this.
> >
> > That said, I think this discussion is orthogonal to the proposed
> > topic. zswap spills to backing swap in LRU order as of today. The
> > LRU/pool size tweaking is an optimization to get smarter zswap/swap
> > placement according to access frequency. The proposed swap descriptor
> > is an optimization to get better disk utilization, the ability to run
> > zswap without backing swap, and a dramatic speedup in swapoff time.
>
> Fully agree.
>
> >
> > > > > > > > Anyway I'm interested in attending the discussion for this topic.
> > > > > > >
> > > > > > > Great! Looking forward to discuss this more!
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > ==================== Cost ====================
> > > > > > > > > The obvious downside of this is added memory overhead, specifically
> > > > > > > > > for users that use swapfiles without zswap. Instead of paying one byte
> > > > > > > > > (swap_map) for every potential page in the swapfile (+ swap count
> > > > > > > > > continuation), we pay the size of the swap_desc for every page that is
> > > > > > > > > actually in the swapfile, which I am estimating can be roughly around
> > > > > > > > > 24 bytes or so, so maybe 0.6% of swapped out memory. The overhead only
> > > > > > > > > scales with pages actually swapped out. For zswap users, it should be
> > > > > > > > > a win (or at least even) because we get to drop a lot of fields from
> > > > > > > > > struct zswap_entry (e.g. rbtree, index, etc).
> >
> > Shifting the cost from O(swapspace) to O(swapped) could be a win for
> > many regular swap users too.
> >
> > There are the legacy setups that provision 2*RAM worth of swap as an
> > emergency overflow that is then rarely used.
> >
> > We have a setups that swap to disk more proactively, but we also
> > overprovision those in terms of swap space due to the cliff behavior
> > when swap fills up and the VM runs out of options.
> >
> > To make a fair comparison, you really have to take average swap
> > utilization into account. And I doubt that's very high.
>
> Yeah I was looking for some data here, but it varies heavily based on
> the use case, so I opted to only state the overhead of the swap
> descriptor without directly comparing it to the current overhead.
>
> >
> > In terms of worst-case behavior, +0.8% per swapped page doesn't sound
> > like a show-stopper to me. Especially when compared to zswap's current
> > O(swapped) waste of disk space.
>
> Yeah for zswap users this should be a win on most/all fronts, even
> memory overhead, as we will end up trimming struct zswap_entry which
> is also O(swapped) memory overhead. It should also make zswap
> available for more use cases. You don't need to provision and
> configure swap space, you just need to turn zswap on.
>
> >
> > > > > > > > > Another potential concern is readahead. With this design, we have no
> > > > > > > > > way to get a swap_desc given a swap entry (type & offset). We would
> > > > > > > > > need to maintain a reverse mapping, adding a little bit more overhead,
> > > > > > > > > or search all swapped out pages instead :). A reverse mapping might
> > > > > > > > > pump the per-swapped page overhead to ~32 bytes (~0.8% of swapped out
> > > > > > > > > memory).
> > > > > > > > >
> > > > > > > > > ==================== Bottom Line ====================
> > > > > > > > > It would be nice to discuss the potential here and the tradeoffs. I
> > > > > > > > > know that other folks using zswap (or interested in using it) may find
> > > > > > > > > this very useful. I am sure I am missing some context on why things
> > > > > > > > > are the way they are, and perhaps some obvious holes in my story.
> > > > > > > > > Looking forward to discussing this with anyone interested :)
> > > > > > > > >
> > > > > > > > > I think Johannes may be interested in attending this discussion, since
> > > > > > > > > a lot of ideas here are inspired by discussions I had with him :)

Hi everyone,

I came across this interesting proposal and I would like to
participate in the discussion. I think it will be useful/overlap with
some projects we are currently planning in Android.

Thanks,
Kalesh

> >
> > Thanks!
>


  reply	other threads:[~2023-02-28  4:29 UTC|newest]

Thread overview: 105+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2023-02-18 22:38 [LSF/MM/BPF TOPIC] Swap Abstraction / Native Zswap Yosry Ahmed
2023-02-19  4:31 ` Matthew Wilcox
2023-02-19  9:34   ` Yosry Ahmed
2023-02-28 23:22   ` Chris Li
2023-03-01  0:08     ` Matthew Wilcox
2023-03-01 23:22       ` Chris Li
2023-02-21 18:39 ` Yang Shi
2023-02-21 18:56   ` Yosry Ahmed
2023-02-21 19:26     ` Yang Shi
2023-02-21 19:46       ` Yosry Ahmed
2023-02-21 23:34         ` Yang Shi
2023-02-21 23:38           ` Yosry Ahmed
2023-02-22 16:57             ` Johannes Weiner
2023-02-22 22:46               ` Yosry Ahmed
2023-02-28  4:29                 ` Kalesh Singh [this message]
2023-02-28  8:09                   ` Yosry Ahmed
2023-02-28  4:54 ` Sergey Senozhatsky
2023-02-28  8:12   ` Yosry Ahmed
2023-02-28 23:29     ` Minchan Kim
2023-03-02  0:58       ` Yosry Ahmed
2023-03-02  1:25         ` Yosry Ahmed
2023-03-02 17:05         ` Chris Li
2023-03-02 17:47         ` Chris Li
2023-03-02 18:15           ` Johannes Weiner
2023-03-02 18:56             ` Chris Li
2023-03-02 18:23           ` Rik van Riel
2023-03-02 21:42             ` Chris Li
2023-03-02 22:36               ` Rik van Riel
2023-03-02 22:55                 ` Yosry Ahmed
2023-03-03  4:05                   ` Chris Li
2023-03-03  0:01                 ` Chris Li
2023-03-02 16:58       ` Chris Li
2023-03-01 10:44     ` Sergey Senozhatsky
2023-03-02  1:01       ` Yosry Ahmed
2023-02-28 23:11 ` Chris Li
2023-03-02  0:30   ` Yosry Ahmed
2023-03-02  1:00     ` Yosry Ahmed
2023-03-02 16:51     ` Chris Li
2023-03-03  0:33     ` Minchan Kim
2023-03-03  0:49       ` Yosry Ahmed
2023-03-03  1:25         ` Minchan Kim
2023-03-03 17:15           ` Yosry Ahmed
2023-03-09 12:48     ` Huang, Ying
2023-03-09 19:58       ` Chris Li
2023-03-09 20:19       ` Yosry Ahmed
2023-03-10  3:06         ` Huang, Ying
2023-03-10 23:14           ` Chris Li
2023-03-13  1:10             ` Huang, Ying
2023-03-15  7:41               ` Yosry Ahmed
2023-03-16  1:42                 ` Huang, Ying
2023-03-11  1:06           ` Yosry Ahmed
2023-03-13  2:12             ` Huang, Ying
2023-03-15  8:01               ` Yosry Ahmed
2023-03-16  7:50                 ` Huang, Ying
2023-03-17 10:19                   ` Yosry Ahmed
2023-03-17 18:19                     ` Chris Li
2023-03-17 18:23                       ` Yosry Ahmed
2023-03-20  2:55                     ` Huang, Ying
2023-03-20  6:25                       ` Chris Li
2023-03-23  0:56                         ` Huang, Ying
2023-03-23  6:46                           ` Chris Li
2023-03-23  6:56                             ` Huang, Ying
2023-03-23 18:28                               ` Chris Li
2023-03-23 18:40                                 ` Yosry Ahmed
2023-03-23 19:49                                   ` Chris Li
2023-03-23 19:54                                     ` Yosry Ahmed
2023-03-23 21:10                                       ` Chris Li
2023-03-24 17:28                                       ` Chris Li
2023-03-22  5:56                       ` Yosry Ahmed
2023-03-23  1:48                         ` Huang, Ying
2023-03-23  2:21                           ` Yosry Ahmed
2023-03-23  3:16                             ` Huang, Ying
2023-03-23  3:27                               ` Yosry Ahmed
2023-03-23  5:37                                 ` Huang, Ying
2023-03-23 15:18                                   ` Yosry Ahmed
2023-03-24  2:37                                     ` Huang, Ying
2023-03-24  7:28                                       ` Yosry Ahmed
2023-03-24 17:23                                         ` Chris Li
2023-03-27  1:23                                           ` Huang, Ying
2023-03-28  5:54                                             ` Yosry Ahmed
2023-03-28  6:20                                               ` Huang, Ying
2023-03-28  6:29                                                 ` Yosry Ahmed
2023-03-28  6:59                                                   ` Huang, Ying
2023-03-28  7:59                                                     ` Yosry Ahmed
2023-03-28 14:14                                                       ` Johannes Weiner
2023-03-28 19:59                                                         ` Yosry Ahmed
2023-03-28 21:22                                                           ` Chris Li
2023-03-28 21:30                                                             ` Yosry Ahmed
2023-03-28 20:50                                                       ` Chris Li
2023-03-28 21:01                                                         ` Yosry Ahmed
2023-03-28 21:32                                                           ` Chris Li
2023-03-28 21:44                                                             ` Yosry Ahmed
2023-03-28 22:01                                                               ` Chris Li
2023-03-28 22:02                                                                 ` Yosry Ahmed
2023-03-29  1:31                                                               ` Huang, Ying
2023-03-29  1:41                                                                 ` Yosry Ahmed
2023-03-29 16:04                                                                   ` Chris Li
2023-04-04  8:24                                                                     ` Huang, Ying
2023-04-04  8:10                                                                   ` Huang, Ying
2023-04-04  8:47                                                                     ` Yosry Ahmed
2023-04-06  1:40                                                                       ` Huang, Ying
2023-03-29 15:22                                                                 ` Chris Li
2023-03-10  2:07 ` Luis Chamberlain
2023-03-10  2:15   ` Yosry Ahmed
2023-05-12  3:07 ` Yosry Ahmed

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='CAC_TJve7e=sz4uPDuRvauj1hr=evOUWbSoz91wniSQYUbv0ajA@mail.gmail.com' \
    --to=kaleshsingh@google.com \
    --cc=akailash@google.com \
    --cc=akpm@linux-foundation.org \
    --cc=ddstreet@ieee.org \
    --cc=hannes@cmpxchg.org \
    --cc=hughd@google.com \
    --cc=linux-mm@kvack.org \
    --cc=lsf-pc@lists.linux-foundation.org \
    --cc=mhocko@kernel.org \
    --cc=minchan@kernel.org \
    --cc=nphamcs@gmail.com \
    --cc=peterx@redhat.com \
    --cc=rientjes@google.com \
    --cc=shakeelb@google.com \
    --cc=shy828301@gmail.com \
    --cc=sjenning@redhat.com \
    --cc=vitaly.wool@konsulko.com \
    --cc=yosryahmed@google.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.