From: Suren Baghdasaryan <surenb@google.com>
To: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Kent Overstreet <kent.overstreet@linux.dev>,
	Michal Hocko <mhocko@suse.com>, Vlastimil Babka <vbabka@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Mel Gorman <mgorman@suse.de>, Davidlohr Bueso <dave@stgolabs.net>,
	Matthew Wilcox <willy@infradead.org>,
	"Liam R. Howlett" <liam.howlett@oracle.com>,
	David Vernet <void@manifault.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Laurent Dufour <ldufour@linux.ibm.com>,
	Peter Xu <peterx@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Jens Axboe <axboe@kernel.dk>,
	mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org,
	changbin.du@intel.com, ytcoode@gmail.com,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Benjamin Segall <bsegall@google.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Valentin Schneider <vschneid@redhat.com>,
	Christopher Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	42.hyeyoo@gmail.com, Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	Shakeel Butt <shakeelb@google.com>,
	Muchun Song <songmuchun@bytedance.com>,
	arnd@arndb.de, jbaron@akamai.com,
	David Rientjes <rientjes@google.com>,
	Minchan Kim <minchan@google.com>,
	Kalesh Singh <kaleshsingh@google.com>,
	kernel-team <kernel-team@android.com>,
	linux-mm <linux-mm@kvack.org>,
	iommu@lists.linux.dev, kasan-dev@googlegroups.com,
	io-uring@vger.kernel.org, linux-arch@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-modules@vger.kernel.org,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 11/30] mm: introduce slabobj_ext to support slab object extensions
Date: Thu, 1 Sep 2022 17:23:38 -0700
Message-ID: <CAJuCfpECU8NsC_kUSE7ef33_HUkZP5S2rEbxOvfnmM2Qb4TKBA@mail.gmail.com>
In-Reply-To: <YxFB3tlMqakx+hiL@P9FQF9L96D.corp.robot.car>

On Thu, Sep 1, 2022 at 4:36 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Tue, Aug 30, 2022 at 02:49:00PM -0700, Suren Baghdasaryan wrote:
> > Currently slab pages can store only vectors of obj_cgroup pointers in
> > page->memcg_data. Introduce a slabobj_ext structure to allow more data
> > to be stored for each slab object. Wrap obj_cgroup into slabobj_ext
> > to support the current functionality while allowing slabobj_ext to be
> > extended in the future.
> >
> > Note: ideally the config dependency should be turned the other way around:
> > MEMCG should depend on SLAB_OBJ_EXT and {page|slab|folio}.memcg_data would
> > be renamed to something like {page|slab|folio}.objext_data. However, doing
> > this in an RFC would introduce considerable churn unrelated to the overall
> > idea, so I am deferring it until v1.
>
> Hi Suren!

Hi Roman,

>
> I'd say CONFIG_MEMCG_KMEM and CONFIG_YOUR_NEW_STUFF should both depend on
> SLAB_OBJ_EXT.
> CONFIG_MEMCG_KMEM depends on CONFIG_MEMCG anyway.

Yes, I agree. I wanted to mention here that the current dependency is
incorrect and should be reworked. Having both depend on
SLAB_OBJ_EXT seems like the right approach.
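
Roughly something like this (a sketch only; MEM_ALLOC_PROFILING is a
placeholder name for the new allocation tagging option):

config SLAB_OBJ_EXT
        bool

config MEMCG_KMEM
        bool
        depends on MEMCG && !SLOB
        select SLAB_OBJ_EXT
        default y

config MEM_ALLOC_PROFILING
        bool "Enable memory allocation profiling"
        select SLAB_OBJ_EXT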

>
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> >  include/linux/memcontrol.h |  18 ++++--
> >  init/Kconfig               |   5 ++
> >  mm/kfence/core.c           |   2 +-
> >  mm/memcontrol.c            |  60 ++++++++++---------
> >  mm/page_owner.c            |   2 +-
> >  mm/slab.h                  | 119 +++++++++++++++++++++++++------------
> >  6 files changed, 131 insertions(+), 75 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 6257867fbf95..315399f77173 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -227,6 +227,14 @@ struct obj_cgroup {
> >       };
> >  };
> >
> > +/*
> > + * Extended information for slab objects stored as an array in page->memcg_data
> > + * if MEMCG_DATA_OBJEXTS is set.
> > + */
> > +struct slabobj_ext {
> > +     struct obj_cgroup *objcg;
> > +} __aligned(8);
>
> Why do we need this alignment requirement?

It was meant to save space by avoiding padding; however, all members
today are pointers, so it's meaningless and we can safely drop it.
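
For illustration (assuming a 64-bit build, not part of the patch):
with pointer-only members the attribute is a no-op, since compile-time
checks like these would already hold with or without __aligned(8):

        /* natural alignment of a pointer member is already 8 */
        BUILD_BUG_ON(sizeof(struct slabobj_ext) !=
                     sizeof(struct obj_cgroup *));
        BUILD_BUG_ON(__alignof__(struct slabobj_ext) !=
                     __alignof__(struct obj_cgroup *));

so dropping it changes nothing in the generated layout.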

>
> > +
> >  /*
> >   * The memory controller data structure. The memory controller controls both
> >   * page cache and RSS per cgroup. We would eventually like to provide
> > @@ -363,7 +371,7 @@ extern struct mem_cgroup *root_mem_cgroup;
> >
> >  enum page_memcg_data_flags {
> >       /* page->memcg_data is a pointer to an objcgs vector */
> > -     MEMCG_DATA_OBJCGS = (1UL << 0),
> > +     MEMCG_DATA_OBJEXTS = (1UL << 0),
> >       /* page has been accounted as a non-slab kernel page */
> >       MEMCG_DATA_KMEM = (1UL << 1),
> >       /* the next bit after the last actual flag */
> > @@ -401,7 +409,7 @@ static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
> >       unsigned long memcg_data = folio->memcg_data;
> >
> >       VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
> > -     VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
> > +     VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
> >       VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
> >
> >       return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
> > @@ -422,7 +430,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
> >       unsigned long memcg_data = folio->memcg_data;
> >
> >       VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
> > -     VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
> > +     VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
> >       VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);
> >
> >       return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
> > @@ -517,7 +525,7 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> >        */
> >       unsigned long memcg_data = READ_ONCE(page->memcg_data);
> >
> > -     if (memcg_data & MEMCG_DATA_OBJCGS)
> > +     if (memcg_data & MEMCG_DATA_OBJEXTS)
> >               return NULL;
> >
> >       if (memcg_data & MEMCG_DATA_KMEM) {
> > @@ -556,7 +564,7 @@ static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *ob
> >  static inline bool folio_memcg_kmem(struct folio *folio)
> >  {
> >       VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page);
> > -     VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJCGS, folio);
> > +     VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJEXTS, folio);
> >       return folio->memcg_data & MEMCG_DATA_KMEM;
> >  }
> >
> > diff --git a/init/Kconfig b/init/Kconfig
> > index 532362fcfe31..82396d7a2717 100644
> > --- a/init/Kconfig
> > +++ b/init/Kconfig
> > @@ -958,6 +958,10 @@ config MEMCG
> >       help
> >         Provides control over the memory footprint of tasks in a cgroup.
> >
> > +config SLAB_OBJ_EXT
> > +     bool
> > +     depends on MEMCG
> > +
> >  config MEMCG_SWAP
> >       bool
> >       depends on MEMCG && SWAP
> > @@ -966,6 +970,7 @@ config MEMCG_SWAP
> >  config MEMCG_KMEM
> >       bool
> >       depends on MEMCG && !SLOB
> > +     select SLAB_OBJ_EXT
> >       default y
> >
> >  config BLK_CGROUP
> > diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> > index c252081b11df..c0958e4a32e2 100644
> > --- a/mm/kfence/core.c
> > +++ b/mm/kfence/core.c
> > @@ -569,7 +569,7 @@ static unsigned long kfence_init_pool(void)
> >               __folio_set_slab(slab_folio(slab));
> >  #ifdef CONFIG_MEMCG
> >               slab->memcg_data = (unsigned long)&kfence_metadata[i / 2 - 1].objcg |
> > -                                MEMCG_DATA_OBJCGS;
> > +                                MEMCG_DATA_OBJEXTS;
> >  #endif
> >       }
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index b69979c9ced5..3f407ef2f3f1 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2793,7 +2793,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
> >       folio->memcg_data = (unsigned long)memcg;
> >  }
> >
> > -#ifdef CONFIG_MEMCG_KMEM
> > +#ifdef CONFIG_SLAB_OBJ_EXT
> >  /*
> >   * The allocated objcg pointers array is not accounted directly.
> >   * Moreover, it should not come from DMA buffer and is not readily
> > @@ -2801,38 +2801,20 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
> >   */
> >  #define OBJCGS_CLEAR_MASK    (__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
> >
> > -/*
> > - * mod_objcg_mlstate() may be called with irq enabled, so
> > - * mod_memcg_lruvec_state() should be used.
> > - */
> > -static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
> > -                                  struct pglist_data *pgdat,
> > -                                  enum node_stat_item idx, int nr)
> > -{
> > -     struct mem_cgroup *memcg;
> > -     struct lruvec *lruvec;
> > -
> > -     rcu_read_lock();
> > -     memcg = obj_cgroup_memcg(objcg);
> > -     lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > -     mod_memcg_lruvec_state(lruvec, idx, nr);
> > -     rcu_read_unlock();
> > -}
> > -
> > -int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> > -                              gfp_t gfp, bool new_slab)
> > +int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> > +                     gfp_t gfp, bool new_slab)
> >  {
> >       unsigned int objects = objs_per_slab(s, slab);
> >       unsigned long memcg_data;
> >       void *vec;
> >
> >       gfp &= ~OBJCGS_CLEAR_MASK;
> > -     vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
> > +     vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
> >                          slab_nid(slab));
> >       if (!vec)
> >               return -ENOMEM;
> >
> > -     memcg_data = (unsigned long) vec | MEMCG_DATA_OBJCGS;
> > +     memcg_data = (unsigned long) vec | MEMCG_DATA_OBJEXTS;
> >       if (new_slab) {
> >               /*
> >                * If the slab is brand new and nobody can yet access its
> > @@ -2843,7 +2825,7 @@ int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> >       } else if (cmpxchg(&slab->memcg_data, 0, memcg_data)) {
> >               /*
> >                * If the slab is already in use, somebody can allocate and
> > -              * assign obj_cgroups in parallel. In this case the existing
> > +              * assign slabobj_exts in parallel. In this case the existing
> >                * objcg vector should be reused.
> >                */
> >               kfree(vec);
> > @@ -2853,6 +2835,26 @@ int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> >       kmemleak_not_leak(vec);
> >       return 0;
> >  }
> > +#endif /* CONFIG_SLAB_OBJ_EXT */
> > +
> > +#ifdef CONFIG_MEMCG_KMEM
> > +/*
> > + * mod_objcg_mlstate() may be called with irq enabled, so
> > + * mod_memcg_lruvec_state() should be used.
> > + */
> > +static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
> > +                                  struct pglist_data *pgdat,
> > +                                  enum node_stat_item idx, int nr)
> > +{
> > +     struct mem_cgroup *memcg;
> > +     struct lruvec *lruvec;
> > +
> > +     rcu_read_lock();
> > +     memcg = obj_cgroup_memcg(objcg);
> > +     lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > +     mod_memcg_lruvec_state(lruvec, idx, nr);
> > +     rcu_read_unlock();
> > +}
> >
> >  static __always_inline
> >  struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
> > @@ -2863,18 +2865,18 @@ struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
> >        * slab->memcg_data.
> >        */
> >       if (folio_test_slab(folio)) {
> > -             struct obj_cgroup **objcgs;
> > +             struct slabobj_ext *obj_exts;
> >               struct slab *slab;
> >               unsigned int off;
> >
> >               slab = folio_slab(folio);
> > -             objcgs = slab_objcgs(slab);
> > -             if (!objcgs)
> > +             obj_exts = slab_obj_exts(slab);
> > +             if (!obj_exts)
> >                       return NULL;
> >
> >               off = obj_to_index(slab->slab_cache, slab, p);
> > -             if (objcgs[off])
> > -                     return obj_cgroup_memcg(objcgs[off]);
> > +             if (obj_exts[off].objcg)
> > +                     return obj_cgroup_memcg(obj_exts[off].objcg);
> >
> >               return NULL;
> >       }
> > diff --git a/mm/page_owner.c b/mm/page_owner.c
> > index e4c6f3f1695b..fd4af1ad34b8 100644
> > --- a/mm/page_owner.c
> > +++ b/mm/page_owner.c
> > @@ -353,7 +353,7 @@ static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
> >       if (!memcg_data)
> >               goto out_unlock;
> >
> > -     if (memcg_data & MEMCG_DATA_OBJCGS)
> > +     if (memcg_data & MEMCG_DATA_OBJEXTS)
> >               ret += scnprintf(kbuf + ret, count - ret,
> >                               "Slab cache page\n");
> >
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 4ec82bec15ec..c767ce3f0fe2 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -422,36 +422,94 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla
> >       return false;
> >  }
> >
> > +#ifdef CONFIG_SLAB_OBJ_EXT
> > +
> > +static inline bool is_kmem_only_obj_ext(void)
> > +{
> >  #ifdef CONFIG_MEMCG_KMEM
> > +     return sizeof(struct slabobj_ext) == sizeof(struct obj_cgroup *);
> > +#else
> > +     return false;
> > +#endif
> > +}
> > +
> >  /*
> > - * slab_objcgs - get the object cgroups vector associated with a slab
> > + * slab_obj_exts - get the pointer to the slab object extension vector
> > + * associated with a slab.
> >   * @slab: a pointer to the slab struct
> >   *
> > - * Returns a pointer to the object cgroups vector associated with the slab,
> > + * Returns a pointer to the object extension vector associated with the slab,
> >   * or NULL if no such vector has been associated yet.
> >   */
> > -static inline struct obj_cgroup **slab_objcgs(struct slab *slab)
> > +static inline struct slabobj_ext *slab_obj_exts(struct slab *slab)
> >  {
> >       unsigned long memcg_data = READ_ONCE(slab->memcg_data);
> >
> > -     VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS),
> > +     VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJEXTS),
> >                                                       slab_page(slab));
> >       VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab));
> >
> > -     return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
> > +     return (struct slabobj_ext *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
> >  }
> >
> > -int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> > -                              gfp_t gfp, bool new_slab);
> > -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
> > -                  enum node_stat_item idx, int nr);
> > +int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> > +                     gfp_t gfp, bool new_slab);
> >
> > -static inline void memcg_free_slab_cgroups(struct slab *slab)
> > +static inline void free_slab_obj_exts(struct slab *slab)
> >  {
> > -     kfree(slab_objcgs(slab));
> > +     struct slabobj_ext *obj_exts;
> > +
> > +     if (!memcg_kmem_enabled() && is_kmem_only_obj_ext())
> > +             return;
>
> Hm, not sure I understand this. If kmem is disabled and is_kmem_only_obj_ext()
> is true, shouldn't slab->memcg_data == NULL (always)?

So, the logic was to skip freeing when the only possible entries in
slab->memcg_data are "struct obj_cgroup" pointers and kmem is
disabled; in that case no vector was ever allocated, so there is
nothing to free. Otherwise there are other objects stored in
slab->memcg_data that have to be freed. Did I make it more
complicated than it should have been?
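
In other words, the intent of that early return was (just restating
the check from the patch above as comments, not new code):

        /*
         * is_kmem_only_obj_ext(): objcg is the only field in
         * slabobj_ext, so the vector could only have been allocated
         * for kmem accounting.
         * !memcg_kmem_enabled(): kmem accounting never ran, so no
         * vector was ever allocated and there is nothing to free.
         */
        if (!memcg_kmem_enabled() && is_kmem_only_obj_ext())
                return;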

>
> > +
> > +     obj_exts = slab_obj_exts(slab);
> > +     kfree(obj_exts);
> >       slab->memcg_data = 0;
> >  }
> >
> > +static inline void prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
> > +{
> > +     struct slab *slab;
> > +
> > +     /* If kmem is the only extension then the vector will be created conditionally */
> > +     if (is_kmem_only_obj_ext())
> > +             return;
> > +
> > +     slab = virt_to_slab(p);
> > +     if (!slab_obj_exts(slab))
> > +             WARN(alloc_slab_obj_exts(slab, s, flags, false),
> > +                     "%s, %s: Failed to create slab extension vector!\n",
> > +                     __func__, s->name);
> > +}
>
> This looks a bit cryptic: the action is wrapped into WARN() and the rest is a set
> of (semi-)static checks. Can we, please, invert it? E.g. something like:
>
> if (slab_alloc_tracking_enabled()) {
>         slab = virt_to_slab(p);
>         if (!slab_obj_exts(slab))
>                 WARN(alloc_slab_obj_exts(slab, s, flags, false),
>                      "%s, %s: Failed to create slab extension vector!\n",
>                      __func__, s->name);
> }

Yeah, this is much more readable. Thanks for the suggestion and for
reviewing the code!
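
For the record, the reworked hook would then look roughly like this
(with slab_alloc_tracking_enabled() being the helper you suggested;
its exact implementation is still TBD):

static inline void prepare_slab_obj_exts_hook(struct kmem_cache *s,
                                              gfp_t flags, void *p)
{
        struct slab *slab;

        if (!slab_alloc_tracking_enabled())
                return;

        slab = virt_to_slab(p);
        if (!slab_obj_exts(slab))
                WARN(alloc_slab_obj_exts(slab, s, flags, false),
                     "%s, %s: Failed to create slab extension vector!\n",
                     __func__, s->name);
}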

>
> The rest looks good to me.
>
> Thank you!
>
