From: "Michal Koutný" <mkoutny@suse.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@kernel.org>,
Christopher Lameter <cl@linux.com>,
LKML <linux-kernel@vger.kernel.org>,
linux-mm@kvack.org
Subject: Re: SLUB: purpose of sysfs events on cache creation/removal
Date: Fri, 17 Jan 2020 18:13:31 +0100 [thread overview]
Message-ID: <20200117171331.GA17179@blackbody.suse.cz> (raw)
In-Reply-To: <20200109114415.cf01bd3ad30c5c4aec981653@linux-foundation.org>
Hello.
On Thu, Jan 09, 2020 at 11:44:15AM -0800, Andrew Morton <akpm@linux-foundation.org> wrote:
> I looked at it - there wasn't really any compelling followup.
FTR, I noticed udevd consuming non-negligible CPU cycles during some
cgroup stress testing. Even extrapolating to less artificial
situations, the udev events appear to cause useless wakeups of udevd.
I used the simple script below:
cat >measure.sh <<'EOD'
sample() {
	local n=$(echo | awk "END {print int(40/$1)}")
	for i in $(seq $n) ; do
		mkdir /sys/fs/cgroup/memory/grp1
		echo 0 >/sys/fs/cgroup/memory/grp1/cgroup.procs
		/usr/bin/sleep $1
		echo 0 >/sys/fs/cgroup/memory/cgroup.procs
		rmdir /sys/fs/cgroup/memory/grp1
	done
}

for d in 0.004 0.008 0.016 0.032 0.064 0.128 0.256 0.5 1 ; do
	echo 0 >/sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
	{ time sample $d ; } 2>&1 | grep real
	echo -n "udev "
	cat /sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
done
EOD
and I drew the following ballpark conclusion: udevd consumed ~1.7% CPU
time at 1 event/s, which extrapolates linearly to ~100% CPU at roughly
60 events/s.
(An event is one mkdir/migrate/rmdir sequence. The numbers are from a
dummy test VM, so take them with a grain of salt.)
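For illustration, the linear extrapolation behind that ballpark figure is
just the reciprocal of the measured per-event cost (a sketch; the 0.017
constant is the ~1.7% measured above, everything else is arithmetic):

```python
# Linear extrapolation of udevd CPU usage vs. cgroup event rate.
# Measured above: ~1.7% of one CPU at 1 event/s
# (one mkdir/migrate/rmdir sequence per event).
cpu_fraction_per_event_rate = 0.017  # CPU fraction per (event/s)

# Event rate at which udevd would saturate one CPU (100%):
saturation_rate = 1.0 / cpu_fraction_per_event_rate
print(round(saturation_rate))  # ~59, i.e. roughly 60 events/s
```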
> If this change should be pursued then can we please have a formal
> resend?
Who's supposed to do that?
Regards,
Michal
Thread overview: 26+ messages
2019-11-26 12:19 SLUB: purpose of sysfs events on cache creation/removal Michal Hocko
2019-11-26 16:32 ` Christopher Lameter
2019-11-26 16:54 ` Michal Hocko
2019-11-27 15:40 ` Christopher Lameter
2019-11-27 16:24 ` Michal Hocko
2019-11-27 16:26 ` Christopher Lameter
2019-11-27 17:43 ` Michal Hocko
2019-12-04 13:28 ` Michal Hocko
2019-12-04 15:25 ` Christopher Lameter
2019-12-04 15:32 ` Michal Hocko
2019-12-04 16:53 ` Christopher Lameter
2019-12-04 17:32 ` Michal Hocko
2020-01-06 11:57 ` Michal Hocko
2020-01-06 15:51 ` Christopher Lameter
2020-01-09 14:52 ` Michal Hocko
2020-01-09 19:44 ` Andrew Morton
2020-01-09 20:13 ` Michal Hocko
2020-01-09 20:15 ` Michal Hocko
2020-01-17 17:13 ` Michal Koutný [this message]
2020-01-19 0:15 ` Andrew Morton
2020-01-27 17:33 ` Michal Koutný
2020-01-27 23:04 ` Christopher Lameter
2020-01-28 8:51 ` Michal Koutný
2020-01-28 18:13 ` Christopher Lameter
2020-01-30 13:16 ` Vlastimil Babka
2020-01-09 14:07 ` Vlastimil Babka