From: Stephen Smalley <sds@tycho.nsa.gov>
To: rsiddoji@codeaurora.org, selinux@vger.kernel.org
Cc: paul@paul-moore.com, linux-security-module@vger.kernel.org
Subject: Re: Looks like issue in handling active_nodes count in 4.19 kernel .
Date: Wed, 11 Dec 2019 09:47:32 -0500 [thread overview]
Message-ID: <d6081414-613f-fdb8-8dcd-9ebf6a3baa27@tycho.nsa.gov> (raw)
In-Reply-To: <7b047966-33c0-de62-b10f-047819890337@tycho.nsa.gov>
On 12/11/19 9:37 AM, Stephen Smalley wrote:
> On 12/9/19 1:30 PM, rsiddoji@codeaurora.org wrote:
>> Thanks for the quick response; yes, it would be helpful if you could
>> submit the change.
>> On the second issue: in avc_alloc_node() we check the slot status as
>> active_nodes > 512 (the default). Should the check instead be based on
>> occupancy, e.g. active_nodes > 80% of slots occupied, or 16*512?
>> Or maybe we need to use different logic.
>
> Are you seeing an actual problem with this in practice, and if so, what
> exactly is it that you are seeing and do you have a reproducer?
BTW, on Linux distributions, there is an avcstat(8) utility that can be
used to monitor the AVC statistics, or you can directly read the stats
from the kernel via /sys/fs/selinux/avc/cache_stats.
>
>>
>>> /*@ static struct avc_node *avc_alloc_node(struct selinux_avc *avc) */
>>>
>>> if (atomic_inc_return(&avc->avc_cache.active_nodes) >
>>> avc->avc_cache_threshold) // default threshold is 512
>>> avc_reclaim_node(avc);
>>>
>>
>> Regards,
>> Ravi
>>
>> -----Original Message-----
>> From: selinux-owner@vger.kernel.org <selinux-owner@vger.kernel.org> On
>> Behalf Of Stephen Smalley
>> Sent: Monday, December 9, 2019 11:35 PM
>> To: rsiddoji@codeaurora.org; selinux@vger.kernel.org
>> Cc: paul@paul-moore.com; linux-security-module@vger.kernel.org
>> Subject: Re: Looks like issue in handling active_nodes count in 4.19
>> kernel .
>>
>> On 12/9/19 10:55 AM, rsiddoji@codeaurora.org wrote:
>>> Hi team,
>>> It looks like we have an issue in handling the "active_nodes" count in
>>> the SELinux avc.c file: avc_cache.active_nodes can grow larger than the
>>> slot array, and avc_reclaim_node() ends up being called frequently from
>>> avc_alloc_node().
>>>
>>> The following are the two instances that seem to be the possible
>>> culprits, as seen on a 4.19 kernel. Can you comment if my
>>> understanding is wrong?
>>>
>>>
>>> #1. The active_nodes count is incremented in avc_alloc_node(avc),
>>> which is called from avc_insert(). If the code takes the failure
>>> path on avc_xperms_populate(), this counter is never decremented.
>>>
>>>
>>> static struct avc_node *avc_insert(struct selinux_avc *avc,
>>>                                    u32 ssid, u32 tsid, u16 tclass,
>>>                                    struct av_decision *avd,
>>> ....
>>>         node = avc_alloc_node(avc);   // incremented here
>>> ....
>>>         rc = avc_xperms_populate(node, xp_node);   // this can fail
>>>         if (rc) {
>>>                 kmem_cache_free(avc_node_cachep, node);
>>>                 // but on failure we are not decrementing active_nodes?
>>>                 return NULL;
>>>         }
>>
>> I think you are correct; we should perhaps be calling avc_node_kill()
>> here as we do in an earlier error path?
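
To illustrate the point, here is a minimal userspace sketch of the accounting
bug. The structures and helper names are simplified, hypothetical stand-ins
for the kernel's (this is not the actual avc.c code); it only shows why the
error path must decrement the counter, which is what avc_node_kill() does:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Simplified, hypothetical stand-ins for the kernel structures. */
struct avc_cache { atomic_int active_nodes; };
struct avc_node  { int unused; };

struct avc_node *avc_alloc_node(struct avc_cache *cache)
{
    struct avc_node *node = calloc(1, sizeof(*node));
    if (!node)
        return NULL;
    /* The node is counted as active as soon as it is allocated. */
    atomic_fetch_add(&cache->active_nodes, 1);
    return node;
}

/* The buggy error path: frees the node but leaves active_nodes inflated. */
void free_node_leaky(struct avc_cache *cache, struct avc_node *node)
{
    (void)cache;
    free(node);
}

/* The fixed error path, analogous to avc_node_kill(): free AND decrement. */
void avc_node_kill(struct avc_cache *cache, struct avc_node *node)
{
    free(node);
    atomic_fetch_sub(&cache->active_nodes, 1);
}
```

With the leaky variant, every avc_xperms_populate() failure permanently
inflates active_nodes, so the threshold comparison fires earlier and earlier
even though the cache actually holds fewer entries.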
>>
>>>
>>> #2. The logic comparing active_nodes against avc_cache_threshold
>>> seems odd: once the count of active nodes exceeds 512, we may end up
>>> calling avc_reclaim_node() frequently, well before the slots are
>>> full, so maybe we are not using the cache to its best. Should we be
>>> comparing against some high watermark instead, or is my
>>> understanding wrong?
>>> /*@ static struct avc_node *avc_alloc_node(struct selinux_avc *avc) */
>>>
>>> if (atomic_inc_return(&avc->avc_cache.active_nodes) >
>>> avc->avc_cache_threshold) // default threshold is 512
>>> avc_reclaim_node(avc);
>>>
>>
>> Not entirely sure what you are asking here. avc_reclaim_node() should
>> reclaim multiple nodes up to AVC_CACHE_RECLAIM. Possibly that should
>> be configurable via selinuxfs too, and/or calculated from
>> avc_cache_threshold in some way?
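
To make the frequency question concrete, here is a small userspace simulation
(illustrative only; the constants mirror the kernel defaults, but none of this
is the actual kernel code) of how often the `> threshold` check fires once the
cache is warm:

```c
#include <assert.h>

#define THRESHOLD      512  /* default avc_cache_threshold */
#define RECLAIM_BATCH   16  /* stands in for AVC_CACHE_RECLAIM */

/* Count reclaim passes triggered over a run of insertions,
 * mirroring the check in avc_alloc_node(). */
int count_reclaims(int insertions)
{
    int active = 0, reclaims = 0;

    for (int i = 0; i < insertions; i++) {
        if (++active > THRESHOLD) {
            /* avc_reclaim_node() frees up to a batch of nodes. */
            active -= RECLAIM_BATCH;
            reclaims++;
        }
    }
    return reclaims;
}
```

Once the cache reaches the threshold, a reclaim pass runs roughly every
RECLAIM_BATCH insertions in steady state, which is the churn being described
above; it reclaims a batch each time rather than a single node, so the cache
never actually fills beyond threshold + 1.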
>>
>> Were you interested in creating a patch to fix the first issue above
>> or looking to us to do so?
>>
>>
>>
>
Thread overview: 18+ messages
[not found] <0101016eeb5fdf43-18f58c0b-8670-43eb-ad08-60dae381f0fd-000000@us-west-2.amazonses.com>
2019-12-09 18:05 ` Looks like issue in handling active_nodes count in 4.19 kernel Stephen Smalley
2019-12-09 18:30 ` rsiddoji
2019-12-11 14:37 ` Stephen Smalley
2019-12-11 14:47 ` Stephen Smalley [this message]
2019-12-11 15:35 ` rsiddoji
[not found] ` <0101016ef59a2152-41e65aac-8784-4401-b20d-45b2852872d4-000000@us-west-2.amazonses.com>
2019-12-11 15:53 ` Stephen Smalley
2019-12-17 15:40 ` Ravi Kumar Siddojigari
2019-12-17 15:52 ` Stephen Smalley
2019-12-17 16:23 ` Stephen Smalley
2019-12-18 5:58 ` Ravi Kumar Siddojigari
2019-12-18 13:53 ` Stephen Smalley
2019-12-19 2:20 ` Paul Moore
2019-12-19 9:48 ` Ravi Kumar Siddojigari
2019-12-19 16:00 ` Stephen Smalley
2019-12-19 18:11 ` Paul Moore
2019-12-20 12:03 ` Ravi Kumar Siddojigari
2019-12-21 16:02 ` Paul Moore
2019-12-09 15:55 rsiddoji