From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 10 Sep 2021 08:00:00 +0200
From: Greg KH
To: "taoyi.ty"
Cc: tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org,
        mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com,
        linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
        linux-fsdevel@vger.kernel.org, shanpeic@linux.alibaba.com
Subject: Re: [RFC PATCH 1/2] add pinned flags for kernfs node
Message-ID:
References: <3d871bd0-dab5-c9ca-61b9-6aa137fa9fdf@linux.alibaba.com>
In-Reply-To: <3d871bd0-dab5-c9ca-61b9-6aa137fa9fdf@linux.alibaba.com>

On Fri, Sep 10, 2021 at 10:14:28AM +0800, taoyi.ty wrote:
> 
> On 2021/9/8 8:35 PM, Greg KH wrote:
> > Why are kernfs changes needed for this?  kernfs creation is not
> > necessarily supposed to be "fast", what benchmark needs this type of
> > change to require the addition of this complexity?
> 
> The implementation of the cgroup pool should have nothing to do
> with kernfs, but during development I found that, under background
> CPU load, there is a significant delay between a process being
> woken up to take the mutex and it actually starting to run.
> 
> Creating 400 cgroups concurrently takes about 80ms with no
> background CPU load, but about 700ms when CPU usage is at 40%.
> Reducing sched_wakeup_granularity_ns also reduces the time taken,
> and changing the mutex to a spinlock improves the situation
> considerably.
> 
> So to solve this problem, a mutex should not be used.  The cgroup
> pool relies on kernfs_rename, which takes kernfs_mutex, so I need
> to bypass kernfs_mutex and add a pinned flag for this.  Because
> the locking of kernfs_rename has changed, the creation and
> deletion paths of kernfs have been changed accordingly to keep
> the data consistent.
> 
> I admit that this is not a very elegant design, but I don't know
> how to make it better, so I am raising the problem and seeking
> help from the community.

Look at the changes to kernfs for 5.15-rc1 where a lot of the lock
contention was removed based on benchmarks where kernfs (through sysfs)
was accessed by lots of processes all at once.
That should help a bit in your case, but remember, the creation of
kernfs files is not the "normal" case, so it is not optimized at all.
We have optimized the access case, which is by far the most common.

good luck!

greg k-h