Message-ID: <1513271755.18008.11.camel@tycho.nsa.gov>
Subject: Re: [BUG]kernel softlockup due to sidtab_search_context run for long time because of too many sidtab context node
From: Stephen Smalley
To: Casey Schaufler, yangjihong, "paul@paul-moore.com",
	"eparis@parisplace.org", "selinux@tycho.nsa.gov", Daniel J Walsh,
	Lukas Vrabec, Petr Lautrbach
Cc: "linux-kernel@vger.kernel.org"
Date: Thu, 14 Dec 2017 12:15:55 -0500
In-Reply-To: <79e41bd9-2570-7386-d462-d242a18fb786@schaufler-ca.com>
References: <1BC3DBD98AD61A4A9B2569BC1C0B4437D5D1F3@DGGEMM506-MBS.china.huawei.com>
	 <1513178296.19161.8.camel@tycho.nsa.gov>
	 <23c51943-51a4-4478-760f-375d02caa39b@schaufler-ca.com>
	 <1513269771.18008.6.camel@tycho.nsa.gov>
	 <79e41bd9-2570-7386-d462-d242a18fb786@schaufler-ca.com>
Organization: National Security Agency

On Thu, 2017-12-14 at 09:00 -0800, Casey Schaufler wrote:
> On 12/14/2017 8:42 AM, Stephen Smalley wrote:
> > On Thu, 2017-12-14 at 08:18 -0800, Casey Schaufler wrote:
> > > On 12/13/2017 7:18 AM, Stephen Smalley wrote:
> > > > On Wed, 2017-12-13 at 09:25 +0000, yangjihong wrote:
> > > > > Hello,
> > > > > 
> > > > > I am doing stress testing on a 3.10 kernel (CentOS 7.4),
> > > > > constantly starting numbers of docker containers with
> > > > > selinux enabled, and after about 2 days the kernel panics
> > > > > with a softlockup:
> > > > > 
> > > > >  [] sched_show_task+0xb8/0x120
> > > > >  [] show_lock_info+0x20f/0x3a0
> > > > >  [] watchdog_timer_fn+0x1da/0x2f0
> > > > >  [] ? watchdog_enable_all_cpus.part.4+0x40/0x40
> > > > >  [] __hrtimer_run_queues+0xd2/0x260
> > > > >  [] hrtimer_interrupt+0xb0/0x1e0
> > > > >  [] local_apic_timer_interrupt+0x37/0x60
> > > > >  [] smp_apic_timer_interrupt+0x50/0x140
> > > > >  [] apic_timer_interrupt+0x6d/0x80
> > > > >  [] ? sidtab_context_to_sid+0xb3/0x480
> > > > >  [] ? sidtab_context_to_sid+0x110/0x480
> > > > >  [] ? mls_setup_user_range+0x145/0x250
> > > > >  [] security_get_user_sids+0x3f7/0x550
> > > > >  [] sel_write_user+0x12b/0x210
> > > > >  [] ? sel_write_member+0x200/0x200
> > > > >  [] selinux_transaction_write+0x48/0x80
> > > > >  [] vfs_write+0xbd/0x1e0
> > > > >  [] SyS_write+0x7f/0xe0
> > > > >  [] system_call_fastpath+0x16/0x1b
> > > > > 
> > > > > My opinion:
> > > > > When a docker container starts, it mounts overlay
> > > > > filesystems with different selinux contexts, with mount
> > > > > points such as:
> > > > > 
> > > > > overlay on /var/lib/docker/overlay2/be3ef517730d92fc4530e0e952eae4f6cb0f07b4bc326cb07495ca08fc9ddb66/merged type overlay (rw,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c414,c873",lowerdir=/var/lib/docker/overlay2/l/Z4U7WY6ASNV5CFWLADPARHHWY7:/var/lib/docker/overlay2/l/V2S3HOKEFEOQLHBVAL5WLA3YLS:/var/lib/docker/overlay2/l/46YGYO474KLOULZGDSZDW2JPRI,upperdir=/var/lib/docker/overlay2/be3ef517730d92fc4530e0e952eae4f6cb0f07b4bc326cb07495ca08fc9ddb66/diff,workdir=/var/lib/docker/overlay2/be3ef517730d92fc4530e0e952eae4f6cb0f07b4bc326cb07495ca08fc9ddb66/work)
> > > > > shm on /var/lib/docker/containers/9fd65e177d2132011d7b422755793449c91327ca577b8f5d9d6a4adf218d4876/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c414,c873",size=65536k)
> > > > > overlay on /var/lib/docker/overlay2/38d1544d080145c7d76150530d0255991dfb7258cbca14ff6d165b94353eefab/merged type overlay (rw,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c431,c651",lowerdir=/var/lib/docker/overlay2/l/3MQQXB4UCLFB7ANVRHPAVRCRSS:/var/lib/docker/overlay2/l/46YGYO474KLOULZGDSZDW2JPRI,upperdir=/var/lib/docker/overlay2/38d1544d080145c7d76150530d0255991dfb7258cbca14ff6d165b94353eefab/diff,workdir=/var/lib/docker/overlay2/38d1544d080145c7d76150530d0255991dfb7258cbca14ff6d165b94353eefab/work)
> > > > > shm on /var/lib/docker/containers/662e7f798fc08b09eae0f0f944537a4bcedc1dcf05a65866458523ffd4a71614/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c431,c651",size=65536k)
> > > > > 
> > > > > sidtab_search_context checks whether the context is in the
> > > > > sidtab list; if it is not found, a new node is generated
> > > > > and inserted into the list. As the number of containers
> > > > > grows, so does the number of context nodes: in our testing
> > > > > the final number of nodes reached 300,000+, and a
> > > > > sidtab_context_to_sid call takes 100-200ms, which leads to
> > > > > the system softlockup.
> > > > > 
> > > > > Is this a selinux bug? When a filesystem is unmounted, why
> > > > > is the context node not deleted? I cannot find the relevant
> > > > > function to delete a node in sidtab.c.
> > > > > 
> > > > > Thanks for reading and looking forward to your reply.
> > > > 
> > > > So, does docker just keep allocating a unique category set
> > > > for every new container, never reusing them even if the
> > > > container is destroyed? That would be a bug in docker IMHO.
> > > > Or are you creating an unbounded number of containers and
> > > > never destroying the older ones?
> > > 
> > > You can't reuse the security context. A process in ContainerA
> > > sends a labeled packet to MachineB. ContainerA goes away and
> > > its context is recycled in ContainerC. MachineB responds some
> > > time later, again with a labeled packet. ContainerC gets
> > > information intended for ContainerA, and uses the information
> > > to take over the Elbonian government.
> > 
> > Docker isn't using labeled networking (nor is anything else by
> > default; it is only enabled if explicitly configured).
> 
> If labeled networking weren't an issue we'd have full security
> module stacking by now. Yes, it's an edge case. If you want to
> use labeled NFS or a local filesystem that gets mounted in each
> container (don't tell me that nobody would do that) you've got
> the same problem.

Even if someone were to configure labeled networking, Docker is not
presently relying on that or on SELinux network enforcement for any
security properties, so it really doesn't matter. And if they wanted
to do that, they'd have to coordinate category assignments across all
systems involved, for which no facility exists AFAIK. If you have two
docker instances running on different hosts, I'd wager that they can
hand out the same category sets today to different containers.

With respect to labeled NFS, that's also not the default for nfs
mounts, so again it is a custom configuration and Docker isn't
relying on it for any guarantees today. For local filesystems, they
would normally be context-mounted or using genfscon rather than
xattrs in order to be accessible to the container, thus no persistent
storage of the category sets.

Certainly docker could provide an option to not reuse category sets,
but making that the default is not sane and just guarantees
exhaustion of the SID and context space (just create and tear down
lots of containers every day or more frequently).

> > > > On the selinux userspace side, we'd also like to eliminate
> > > > the use of /sys/fs/selinux/user (sel_write_user ->
> > > > security_get_user_sids) entirely, which is what triggered
> > > > this for you.
> > > > We cannot currently delete a sidtab node because we have no
> > > > way of knowing if there are any lingering references to the
> > > > SID. Fixing that would require reference-counted SIDs, which
> > > > goes beyond just SELinux since SIDs/secids are returned by
> > > > LSM hooks and cached in other kernel data structures.
> > > 
> > > You could delete a sidtab node. The code already deals with
> > > unfindable SIDs. The issue is that eventually you run out of
> > > SIDs. Then you are forced to recycle SIDs, which leads to the
> > > overthrow of the Elbonian government.
> > 
> > We don't know when we can safely delete a sidtab node since SIDs
> > aren't reference counted and we can't know whether it is still
> > in use somewhere in the kernel. Doing so prematurely would lead
> > to the SID being remapped to the unlabeled context, and then
> > likely to undesired denials.
> 
> I would suggest that if you delete a sidtab node and someone comes
> along later and tries to use it that denial is exactly what you
> would desire. I don't see any other rational action.

Yes, if we know that the SID wasn't in use at the time we tore it
down. But if we're just randomly deleting sidtab entries based on
age or something (since we have no reference count), we'll almost
certainly encounter situations where a SID hasn't been accessed in a
long time but is still being legitimately cached somewhere. Just a
file that hasn't been accessed in a while might have that SID still
cached in its inode security blob, or anywhere else.

> > > > sidtab_search_context() could no doubt be optimized for the
> > > > negative case; there was an earlier optimization for the
> > > > positive case by adding a cache to sidtab_context_to_sid()
> > > > prior to calling it. It's a reverse lookup in the sidtab.
> > > 
> > > This seems like a bad idea.
> > 
> > Not sure what you mean, but it can certainly be changed to at
> > least use a hash table for these reverse lookups.