From: Li Wang <liwang@redhat.com>
To: ltp@lists.linux.it
Subject: [LTP] [PATCH v2 6/6] sched/cgroup: Add cfs_bandwidth01
Date: Mon, 31 May 2021 13:33:35 +0800	[thread overview]
Message-ID: <CAEemH2fww4Zwqh6E_C+R9erUpXbebKUzS2eHe4JT9LXnsgLUGQ@mail.gmail.com> (raw)
In-Reply-To: <87k0njjj11.fsf@suse.de>

Hi Richard,

> >> +static void do_test(void)
> >> +{
> >> +       size_t i;
> >> +
> >> +       for (i = 0; i < ARRAY_SIZE(cg_workers); i++)
> >> +               fork_busy_procs_in_cgroup(cg_workers[i]);
> >> +
> >> +       tst_res(TPASS, "Scheduled bandwidth constrained workers");
> >> +
> >> +       sleep(1);
> >> +
> >> +       set_cpu_quota(cg_level2, 50);
> >
> > This test itself looks good.
> > But I got a series of warnings when testing on CGroup V1:
>
> Thanks for testing it.
>
> >
> > # uname -r
> > 4.18.0-296.el8.x86_64
> >
> > [root@dhcp-66-83-181 cfs-scheduler]# ./cfs_bandwidth01
> > tst_test.c:1313: TINFO: Timeout per run is 0h 05m 00s
> > tst_buffers.c:55: TINFO: Test is using guarded buffers
> > cfs_bandwidth01.c:48: TINFO: Set 'worker1/cpu.max' = '3000 10000'
> > cfs_bandwidth01.c:48: TINFO: Set 'worker2/cpu.max' = '2000 10000'
> > cfs_bandwidth01.c:48: TINFO: Set 'worker3/cpu.max' = '3000 10000'
> > cfs_bandwidth01.c:111: TPASS: Scheduled bandwidth constrained workers
> > cfs_bandwidth01.c:42: TBROK:
> > vdprintf(10</sys/fs/cgroup/cpu,cpuacct/ltp/test-8450/level2>,
> > 'cpu.cfs_quota_us', '%u'<5000>): EINVAL (22)
>
> I wonder if your kernel disallows setting this on a trunk node after it
> has been set on leaf nodes (with or without procs in)?

After looking into it for a while, I think CGroup V1 disallows setting a
parent's quota to a smaller fraction of its period than any of its
children already have.

This means we should set level2 to at least '3000/10000', just like what
we did for level3.

  cfs_bandwidth01.c:48: TINFO: Set 'worker1/cpu.max' = '3000 10000'
  cfs_bandwidth01.c:48: TINFO: Set 'worker2/cpu.max' = '2000 10000'
  cfs_bandwidth01.c:48: TINFO: Set 'worker3/cpu.max' = '3000 10000'
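
For what it's worth, the constraint can be reproduced outside of LTP with
two writes to a V1 hierarchy. A minimal sketch (untested, run as root; it
assumes the V1 cpu controller is mounted at /sys/fs/cgroup/cpu and that
the 'parent/child' groups already exist with the default 100000us period):

#include <stdio.h>
#include <errno.h>
#include <string.h>

/* Write a value to a cgroupfs file. The buffered data is flushed by
 * fclose(), which is where a kernel-side EINVAL surfaces. */
static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fprintf(f, "%s", val);
	return fclose(f);
}

int main(void)
{
	/* child: 3000/100000 = 3% of one CPU */
	write_str("/sys/fs/cgroup/cpu/parent/child/cpu.cfs_quota_us", "3000");

	/* parent: 1000/100000 = 1%, below the child's 3% -> EINVAL on V1 */
	if (write_str("/sys/fs/cgroup/cpu/parent/cpu.cfs_quota_us", "1000"))
		printf("parent quota rejected: %s\n", strerror(errno));

	return 0;
}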

But the failure shows level2 being set to only 5000/100000 (far less than
3000/10000). That's because set_cpu_quota() changes the system default
'cpu.cfs_period_us' from 100000 to 10000, so the 50% quota it computes is
only 5000us, and that quota is written while the period is still at its
default of 100000: the kernel sees a 5% parent below its 30% children and
rejects the write.

To verify this hypothesis, I changed the period back to the default 100000
and got all PASS.

--- a/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
+++ b/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
@@ -31,7 +31,7 @@ static struct tst_cgroup_group *cg_workers[3];
 static void set_cpu_quota(const struct tst_cgroup_group *const cg,
                          const float quota_percent)
 {
-       const unsigned int period_us = 10000;
+       const unsigned int period_us = 100000;
        const unsigned int quota_us = (quota_percent / 100) * (float)period_us;

        if (TST_CGROUP_VER(cg, "cpu") != TST_CGROUP_V1) {


# ./cfs_bandwidth01
tst_test.c:1313: TINFO: Timeout per run is 0h 05m 00s
tst_buffers.c:55: TINFO: Test is using guarded buffers
cfs_bandwidth01.c:48: TINFO: Set 'worker1/cpu.max' = '30000 100000'
cfs_bandwidth01.c:48: TINFO: Set 'worker2/cpu.max' = '20000 100000'
cfs_bandwidth01.c:48: TINFO: Set 'worker3/cpu.max' = '30000 100000'
cfs_bandwidth01.c:111: TPASS: Scheduled bandwidth constrained workers
cfs_bandwidth01.c:48: TINFO: Set 'level2/cpu.max' = '50000 100000'
cfs_bandwidth01.c:122: TPASS: Workers exited

Summary:
passed   2
failed   0
broken   0
skipped  0
warnings 0
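
As an aside, the transient EINVAL could probably be avoided for any period
value by writing 'cpu.cfs_period_us' before 'cpu.cfs_quota_us' in
set_cpu_quota(), so the quota is never validated against a stale period.
An untested sketch of the reordered helper (I'm assuming the printf-style
writer from the new CGroup API here; the exact helper name may differ):

#include "tst_test.h"
#include "tst_cgroup.h"

static void set_cpu_quota(const struct tst_cgroup_group *const cg,
			  const float quota_percent)
{
	const unsigned int period_us = 100000;
	const unsigned int quota_us = (quota_percent / 100) * (float)period_us;

	if (TST_CGROUP_VER(cg, "cpu") != TST_CGROUP_V1) {
		/* V2 takes both values in one write, so no ordering issue */
		SAFE_CGROUP_PRINTF(cg, "cpu.max", "%u %u", quota_us, period_us);
	} else {
		/* V1: period first, so the quota write that follows is
		 * checked against the period we actually intend */
		SAFE_CGROUP_PRINTF(cg, "cpu.cfs_period_us", "%u", period_us);
		SAFE_CGROUP_PRINTF(cg, "cpu.cfs_quota_us", "%u", quota_us);
	}
}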


> > unlinkat(10</sys/fs/cgroup/cpu,cpuacct/ltp/test-8450/level2>,
> > 'level3b', AT_REMOVEDIR): EBUSY (16)
> > tst_cgroup.c:896: TWARN:
> > unlinkat(9</sys/fs/cgroup/cpu,cpuacct/ltp/test-8450>, 'level2',
> > AT_REMOVEDIR): EBUSY (16)
> > tst_cgroup.c:766: TWARN: unlinkat(7</sys/fs/cgroup/cpu,cpuacct/ltp>,
> > 'test-8450', AT_REMOVEDIR): EBUSY (16)
>
> This happens because the child processes are still running at cleanup
> because we skipped stopping them. I guess I should fix that.

+1
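
For the record, something like the following in cleanup() should be
enough. This is only a sketch: worker_pids[] and nr_workers are
hypothetical names for however the test ends up tracking the forked
children.

#include <sys/types.h>
#include "tst_test.h"

/* Hypothetical bookkeeping, filled in by fork_busy_procs_in_cgroup() */
static pid_t worker_pids[32];
static size_t nr_workers;

/* Reap every busy-loop child before the CGroup API removes the
 * hierarchy; a still-populated cgroup directory makes rmdir() fail
 * with EBUSY. */
static void stop_workers(void)
{
	size_t i;

	for (i = 0; i < nr_workers; i++) {
		if (worker_pids[i] > 0) {
			SAFE_KILL(worker_pids[i], SIGKILL);
			SAFE_WAITPID(worker_pids[i], NULL, 0);
		}
	}
}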

The patchset looks good with the above two fixes applied.

Reviewed-by: Li Wang <liwang@redhat.com>

-- 
Regards,
Li Wang


Thread overview: 15+ messages

2021-05-21 10:25 [LTP] [PATCH v2 0/6] cfs_bandwidth01 and CGroup API Richard Palethorpe
2021-05-21 10:25 ` [LTP] [PATCH v2 1/6] API/cgroups: Allow fetching of CGroup name Richard Palethorpe
2021-05-21 10:25 ` [LTP] [PATCH v2 2/6] API/cgroups: Remove obsolete function in API Richard Palethorpe
2021-05-21 10:25 ` [LTP] [PATCH v2 3/6] API/cgroups: Add cpu controller Richard Palethorpe
2021-05-21 10:25 ` [LTP] [PATCH v2 4/6] API/cgroups: Auto add controllers to subtree_control in new subgroup Richard Palethorpe
2021-05-21 10:25 ` [LTP] [PATCH v2 5/6] API/cgroups: tst_require fail gracefully with unknown controller Richard Palethorpe
2021-05-27 13:18   ` Li Wang
2021-05-27 15:14     ` Richard Palethorpe
2021-05-28  8:22       ` Li Wang
2021-05-21 10:25 ` [LTP] [PATCH v2 6/6] sched/cgroup: Add cfs_bandwidth01 Richard Palethorpe
2021-05-27 13:26   ` Li Wang
2021-05-28  9:37     ` Richard Palethorpe
2021-05-31  5:33       ` Li Wang [this message]
2021-05-31  6:02         ` Li Wang
2021-06-01 10:42           ` Richard Palethorpe
