From: Alexey Dobriyan <adobriyan@gmail.com>
To: Shakeel Butt <shakeelb@google.com>
Cc: dhowells@redhat.com, Alexander Viro <viro@zeniv.linux.org.uk>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] proc: fixup PDE allocation bloat
Date: Fri, 17 Aug 2018 16:28:36 +0300	[thread overview]
Message-ID: <20180817132836.GA18921@avx2> (raw)
In-Reply-To: <CALvZod7HXbR1hQ-cZ1=n8as7wBkNMC6T0TbRhpgXNTKviJxgCg@mail.gmail.com>

On Thu, Jul 19, 2018 at 05:06:55PM -0700, Shakeel Butt wrote:
> On Sun, Jun 17, 2018 at 2:57 PM Alexey Dobriyan <adobriyan@gmail.com> wrote:
> >
> > commit 24074a35c5c975c94cd9691ae962855333aac47f
> > ("proc: Make inline name size calculation automatic")
> > started to put PDE allocations into the kmalloc-256 cache, which is
> > unnecessary as ~40-character names are very rare.
> >
> > Put allocation back into kmalloc-192 cache for 64-bit non-debug builds.
> >
> > Add a BUILD_BUG_ON to catch when the PDE size gets out of control.
> >
> > Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
> > ---
> >
> >  fs/proc/inode.c    |    6 ++++--
> >  fs/proc/internal.h |   17 +++++++----------
> >  2 files changed, 11 insertions(+), 12 deletions(-)
> >
> > --- a/fs/proc/inode.c
> > +++ b/fs/proc/inode.c
> > @@ -105,8 +105,10 @@ void __init proc_init_kmemcache(void)
> >                 kmem_cache_create("pde_opener", sizeof(struct pde_opener), 0,
> >                                   SLAB_ACCOUNT|SLAB_PANIC, NULL);
> >         proc_dir_entry_cache = kmem_cache_create_usercopy(
> > -               "proc_dir_entry", SIZEOF_PDE_SLOT, 0, SLAB_PANIC,
> > -               OFFSETOF_PDE_NAME, SIZEOF_PDE_INLINE_NAME, NULL);
> > +               "proc_dir_entry", SIZEOF_PDE, 0, SLAB_PANIC,
> 
> Hi Alexey, can you comment if proc_dir_entry_cache should or shouldn't
> have SLAB_ACCOUNT flag?

It should not (but see below):

SLAB_ACCOUNT is for allocations that userspace can trigger directly:
open(2) directly allocates a "struct file".

But /proc entries aren't like that: say, /proc/cpuinfo is created by the
kernel, and userspace can't do anything about it.

Some subsystems do create /proc entries in response to userspace actions
that aren't tied to hardware (example: xt_hashlimit.c), but those are
few, so the kernel doesn't bother accounting them.

Or in other words: a user can't mkdir(1), touch(1), or ln(1) inside /proc
at will, and therefore PDEs aren't accounted.


Thread overview: 6+ messages
2018-06-14 20:09 [PATCH] proc: Make inline name size calculation automatic Alexey Dobriyan
2018-06-14 20:30 ` David Howells
2018-06-15 17:15   ` Alexey Dobriyan
2018-06-17 21:57   ` [PATCH] proc: fixup PDE allocation bloat Alexey Dobriyan
2018-07-20  0:06     ` Shakeel Butt
2018-08-17 13:28       ` Alexey Dobriyan [this message]
