From: John Johansen <john.johansen@canonical.com>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-security-module@vger.kernel.org,
	James Morris <jmorris@namei.org>,
	"Serge E. Hallyn" <serge@hallyn.com>,
	tglx@linutronix.de
Subject: Re: [PATCH 1/2] apparmor: Use a memory pool instead per-CPU caches
Date: Wed, 1 May 2019 14:29:17 -0700	[thread overview]
Message-ID: <02d7772b-5d06-1c32-b089-454547fbe08b@canonical.com> (raw)
In-Reply-To: <20190430144725.gd6r3aketxuqdyir@linutronix.de>

On 4/30/19 7:47 AM, Sebastian Andrzej Siewior wrote:
> On 2019-04-28 16:56:59 [-0700], John Johansen wrote:
>> So, digging into the history of the per-CPU buffers in apparmor:
>> we used to do buffer allocations via kmalloc(), and there were a few
>> reasons for the switch.
>>
>> * speed/lockless: speaks for itself; mediation is already slow enough
> 
> It is shared among all CPUs, but adding/returning a buffer is a
> small, quick operation.
> 
I wouldn't exactly call taking a lock speedy. Getting an available buffer
or returning one is indeed quick; the allocation fallback, not so much.
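
Roughly, the pattern we're discussing looks like this (a minimal sketch
just to fix terms, not the actual patch; all names and sizes here are
hypothetical):

#define AA_BUF_SIZE (8 * 1024)

struct aa_buf {
	struct list_head list;
	char buffer[];
};

static LIST_HEAD(buf_pool);
static DEFINE_SPINLOCK(pool_lock);

static struct aa_buf *aa_get_buffer(void)
{
	struct aa_buf *buf = NULL;

	/* Fast path: pop a preallocated buffer under the pool lock. */
	spin_lock(&pool_lock);
	if (!list_empty(&buf_pool)) {
		buf = list_first_entry(&buf_pool, struct aa_buf, list);
		list_del(&buf->list);
	}
	spin_unlock(&pool_lock);

	/* Slow path: a real allocation, which may sleep. */
	if (!buf)
		buf = kmalloc(sizeof(*buf) + AA_BUF_SIZE, GFP_KERNEL);
	return buf;
}

static void aa_put_buffer(struct aa_buf *buf)
{
	spin_lock(&pool_lock);
	list_add(&buf->list, &buf_pool);
	spin_unlock(&pool_lock);
}

The locked section is quick; it's the kmalloc() fallback that can be
slow.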


>> * some buffer allocations had to be done with GFP_ATOMIC, making them
>>   more likely to fail. Since we fail closed, a failure would block
>>   access. This actually became a serious problem in a couple of
>>   places. Switching to per-CPU buffers and disabling preemption was
>>   the solution.
> 
> GFP_KERNEL is allowed to use IO/swap, and GFP_ATOMIC has emergency
> pools. The new approach won't return a NULL pointer; it simply spins
> until it either allocates new memory or gets a buffer that was just
> returned.
> 

yeah, I am not really a fan of a potential infinite loop trying to
allocate memory. It may be worth retrying once or twice, but
potentially spinning forever on a failed allocation really isn't
acceptable.
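
Something bounded would sit better with me, e.g. (again just a sketch,
reusing the hypothetical names from above):

static struct aa_buf *aa_get_buffer_bounded(void)
{
	struct aa_buf *buf = NULL;
	int tries;

	for (tries = 0; tries < 2; tries++) {
		/* Locked fast path, as above. */
		spin_lock(&pool_lock);
		if (!list_empty(&buf_pool)) {
			buf = list_first_entry(&buf_pool, struct aa_buf,
					       list);
			list_del(&buf->list);
		}
		spin_unlock(&pool_lock);
		if (buf)
			return buf;

		/* May sleep, giving returned buffers a chance to land. */
		buf = kmalloc(sizeof(*buf) + AA_BUF_SIZE, GFP_KERNEL);
		if (buf)
			return buf;
	}
	return NULL;	/* bounded failure: the caller fails closed */
}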

>> * in heavy-use cases we would see a lot of buffers being allocated
>>   and freed, which resulted in locking slowdowns and also buffer
>>   allocation failures. Having the buffers preallocated allowed us
>>   to bound this potential problem.
>>
>> This was all 6 years ago. Going to a memory pool certainly could
>> help, reduce the memory footprint, and would definitely help with
>> preempt/real-time kernels.
>>
>> A big concern with this patchset is reverting back to GFP_KERNEL
>> for everything. We definitely were getting failures due to
>> allocations in atomic context. There have been lots of changes in
>> the kernel over the last six years, so it's possible these cases
>> don't exist anymore. I went through and built some kernels with this
>> patchset and have run through some testing without tripping that
>> problem, but I don't think it has seen enough testing yet.
> 
> Do you want to apply #1 now and #2 later? I audited the ATOMIC->KERNEL
> changes manually and I didn't see any atomic context. It looked like the
> only reason for ATOMIC was the preempt_disable() due to the memory pool.
> 

Indeed, most if not all (I'd have to dig to be sure) of the changes made
in #2 were originally done because of the move to the per-CPU buffers
and disabling preemption.

The problem was that the allocation of the buffer sometimes needed to be
GFP_ATOMIC.
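
To illustrate (a from-memory sketch of the old per-CPU scheme, not the
real code): get_cpu_var() disables preemption, so nothing on that path
may sleep, and any allocation there has to be GFP_ATOMIC:

static DEFINE_PER_CPU(char *, aa_buffer);

static char *aa_get_cpu_buffer(void)
{
	/* Disables preemption until put_cpu_var(); sleeping is a bug. */
	char **bufp = &get_cpu_var(aa_buffer);

	if (!*bufp)
		/* GFP_KERNEL could sleep here, so GFP_ATOMIC it is. */
		*bufp = kmalloc(AA_BUF_SIZE, GFP_ATOMIC);
	return *bufp;
}

static void aa_put_cpu_buffer(void)
{
	put_cpu_var(aa_buffer);	/* re-enables preemption */
}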


Thread overview: 20+ messages
2019-04-05 13:34 [PATCH 1/2] apparmor: Use a memory pool instead per-CPU caches Sebastian Andrzej Siewior
2019-04-05 13:34 ` [PATCH 2/2] apparmor: Switch to GFP_KERNEL where possible Sebastian Andrzej Siewior
2019-05-07 19:57   ` John Johansen
2019-04-15 10:50 ` [PATCH 1/2] apparmor: Use a memory pool instead per-CPU caches Sebastian Andrzej Siewior
2019-04-28 23:56 ` John Johansen
2019-04-30 14:47   ` Sebastian Andrzej Siewior
2019-05-01 21:29     ` John Johansen [this message]
2019-05-02 10:51       ` Sebastian Andrzej Siewior
2019-05-02 13:17         ` Tetsuo Handa
2019-05-02 13:47           ` Sebastian Andrzej Siewior
2019-05-02 14:10             ` Tetsuo Handa
2019-05-03 11:48               ` [PATCH v2] " Sebastian Andrzej Siewior
2019-05-03 11:51                 ` [PATCH v3] " Sebastian Andrzej Siewior
2019-05-03 12:41                   ` Tetsuo Handa
2019-05-03 14:12                     ` [PATCH v4] " Sebastian Andrzej Siewior
2019-05-07 19:57                       ` John Johansen
2019-10-02  8:59                         ` Sebastian Andrzej Siewior
2019-10-02 15:47                           ` John Johansen
2019-10-02 15:52                             ` Sebastian Andrzej Siewior
2019-05-02 19:33         ` [PATCH 1/2] " John Johansen
