From: ebiederm@xmission.com (Eric W. Biederman)
To: Linus Torvalds
Cc: Al Viro, LKML, Kernel Hardening, Linux API, Linux FS Devel,
	Linux Security Module, Akinobu Mita, Alexey Dobriyan, Andrew Morton,
	Andy Lutomirski, Daniel Micay, Djalal Harouni, "Dmitry V. Levin",
	Greg Kroah-Hartman, Ingo Molnar, "J. Bruce Fields", Jeff Layton,
	Jonathan Corbet, Kees Cook, Oleg Nesterov, Solar Designer
Subject: Re: [PATCH v8 07/11] proc: flush task dcache entries from all procfs instances
Date: Wed, 12 Feb 2020 22:37:52 -0600
Message-ID: <87pnejf6fz.fsf@x220.int.ebiederm.org>
In-Reply-To: (Linus Torvalds's message of "Wed, 12 Feb 2020 16:48:14 -0800")
References: <20200210150519.538333-8-gladkov.alexey@gmail.com>
	<87v9odlxbr.fsf@x220.int.ebiederm.org>
	<20200212144921.sykucj4mekcziicz@comp-core-i7-2640m-0182e6>
	<87tv3vkg1a.fsf@x220.int.ebiederm.org>
	<87v9obipk9.fsf@x220.int.ebiederm.org>
	<20200212200335.GO23230@ZenIV.linux.org.uk>
	<20200212203833.GQ23230@ZenIV.linux.org.uk>
	<20200212204124.GR23230@ZenIV.linux.org.uk>
	<87lfp7h422.fsf@x220.int.ebiederm.org>

Linus Torvalds writes:

> On Wed, Feb 12, 2020 at 1:48 PM Eric W. Biederman wrote:
>>
>> The good news is proc_flush_task isn't exactly called from process exit.
>> proc_flush_task is called during zombie clean up.  AKA release_task.
>
> Yeah, that at least avoids some of the nasty locking while dying debug
> problems.
>
> But the one I was more worried about was actually the lock contention
> issue with lots of processes.  The lock is basically a single global
> lock in many situations - yes, it's technically per-ns, but in a lot
> of cases you really only have one namespace anyway.
>
> And we've had problems with global locks in this area before, notably
> the one you call out:
>
>> Further, after proc_flush_task does its thing the code goes
>> and does "write_lock_irq(&tasklist_lock);"
>
> Yeah, so it's not introducing a new issue, but it is potentially
> making something we already know is bad even worse.
>
>> What would be the downside of having a mutex for a list of proc
>> superblocks?  A mutex that is taken for both reading and writing the
>> list.
>
> That's what the original patch actually was, and I was hoping we could
> avoid that thing.
>
> An rwsem would possibly be better, since most cases by far are likely
> about reading.
>
> And yes, I'm very aware of the tasklist_lock, but it's literally why
> I don't want to make a new one.
>
> I'm _hoping_ we can some day come up with something better than
> tasklist_lock.

Yes, I understand that.  I occasionally play with ideas, and I converted
all of proc to rcu to help with the situation, but I haven't come up
with anything clearly better.

All of this is why I was really hoping we could change strategy and see
if we can make the shrinker better at pruning proc inodes.

I think I have an alternate idea that could work.  Add some extra code
into proc_task_readdir that looks for dentries that no longer point to
tasks and d_invalidates them, with the same logic probably being called
from a few more places as well, like proc_pid_readdir, proc_task_lookup,
and proc_pid_lookup.

We could even optimize it and have a process-died flag that we set in
the superblock.  That would batch up the freeing work until the next
time someone reads from proc in a way that would create more dentries,
so the dentries of reaped zombies would not grow without bound.

Hmm.  Given the existence of proc_fill_cache, it would really be a good
idea if readdir and lookup performed some of the freeing work as well,
since on readdir we always populate the dcache for all of the directory
entries.

I am booked solid for the next little while, but if no one beats me to
it I will try to code something like that up, where at least readdir
looks for and invalidates stale dentries.  A rough, untested sketch of
the sort of thing I have in mind is below.

Eric
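
To make that a bit more concrete, here is the sort of helper I am
imagining.  This is only a sketch, not even compile-tested:
proc_prune_dead_dentries() is a made-up name, and the d_subdirs walk
glosses over things a real patch would have to get right (child d_lock
vs parent d_lock, restarting the scan, invalidating more than one stale
entry per call).

/*
 * Sketch: prune child dentries of a /proc directory whose task has
 * already been reaped, so stale zombie entries get dropped the next
 * time someone does a readdir/lookup instead of waiting for memory
 * pressure to push them out via the shrinker.
 */
static void proc_prune_dead_dentries(struct dentry *dir)
{
	struct dentry *child, *stale = NULL;

	spin_lock(&dir->d_lock);
	list_for_each_entry(child, &dir->d_subdirs, d_child) {
		struct inode *inode = d_inode(child);
		struct task_struct *task;

		if (!inode)			/* negative dentry */
			continue;

		task = get_proc_task(inode);	/* NULL once the task is gone */
		if (task) {
			put_task_struct(task);
			continue;
		}

		/* Task is gone; take a ref so we can invalidate outside d_lock. */
		stale = dget(child);
		break;
	}
	spin_unlock(&dir->d_lock);

	if (stale) {
		d_invalidate(stale);	/* may sleep, so do it after unlocking */
		dput(stale);
	}
}

proc_task_readdir/proc_pid_readdir (and the lookup paths) could call
this on the directory they are about to fill via proc_fill_cache,
possibly gated on the hypothetical process-died flag in the proc
superblock so we do not walk d_subdirs on every readdir.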