From: Tejun Heo <tj@kernel.org>
To: torvalds@linux-foundation.org, akpm@linux-foundation.org,
	a.p.zijlstra@chello.nl, mingo@redhat.com, lizefan@huawei.com,
	hannes@cmpxchg.org, pjt@google.com
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-api@vger.kernel.org, kernel-team@fb.com
Subject: Example program for PRIO_RGRP
Date: Fri, 11 Mar 2016 11:05:22 -0500
Message-ID: <20160311160522.GA24046@htj.duckdns.org>
In-Reply-To: <1457710888-31182-1-git-send-email-tj@kernel.org>

[-- Attachment #1: Type: text/plain, Size: 1202 bytes --]

Hello,

The attached test-rgrp-burn.c is an example program making use of the
new PRIO_RGRP.  The test program creates the following rgroup
hierarchy

  sgroup - main thread
	   + [rgroup-0] burner-0
	   + [rgroup-1] + [rgroup-2] burner-1
			+ [rgroup-3] burner-2

and takes up to 4 arguments respectively specifying the nice level for
each rgroup.  Each burner thread executes CPU burning loops and
periodically prints out how many loops it has completed.

* "./test-rgrp-burn"

  If the program is run without any argument, on a kernel which
  doesn't support rgroup, or from a cgroup where the cpu controller is
  not available, the three burner threads run at about equivalent
  speeds.

* "./test-rgrp-burn 0" from a cgroup w/ cpu controller

  The cpu controller is enabled at the top level, so rgroup-0 and
  rgroup-1 compete on equal footing, and burner-0 runs twice as fast
  as burner-1 or burner-2.

* "./test-rgrp-burn 0 3 -1 2" from a cgroup w/ cpu controller

  The cpu controller is enabled at both levels.  A nice level
  difference of 3 is about a twofold difference in weight, so the
  ratio would roughly be burner-0 : burner-1 : burner-2 ~= 3 : 2 : 1.

Thanks.

-- 
tejun

[-- Attachment #2: test-rgrp-burn.c --]
[-- Type: text/plain, Size: 3315 bytes --]

/*
 * test-rgrp-burn - rgrp test program
 *
 * Creates the following rgrp hierarchy of three CPU cycle burner
 * threads.
 *
 * sgrp - main thread
 *	  + [rgrp-0] burner thread
 *	  + [rgrp-1] + [rgrp-2] nested burner thread
 *		     + [rgrp-3] nested burner thread
 *
 * Takes up to 4 arguments specifying the nice level of each rgrp.
 */
#define _GNU_SOURCE

#include <sys/types.h>
#include <sys/wait.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <limits.h>
#include <pthread.h>
#include <sys/syscall.h>
#include <sys/time.h>
#include <sys/resource.h>

#define CLONE_NEWRGRP		0x00001000	/* New resource group */
#define PRIO_RGRP		3

#define STACK_SIZE		(4 * 1024 * 1024)
#define CLONE_THREAD_FLAGS	(CLONE_THREAD | CLONE_SIGHAND | CLONE_VM | \
				 CLONE_FS | CLONE_FILES)

static int nice_val[] = { [0 ... 3] = INT_MIN };
static pthread_mutex_t lprintf_mutex;

#define lprintf(fmt, args...) do {				\
	pthread_mutex_lock(&lprintf_mutex);			\
	printf(fmt, ##args);					\
	pthread_mutex_unlock(&lprintf_mutex);			\
} while (0)

static int gettid(void)
{
	return syscall(SYS_gettid);
}

static int burner_fn(void *arg)
{
	unsigned long a = 37, cnt = 0;

	sleep(1);
	lprintf("burner : %d started\n", gettid());

	while (1) {
		*(volatile unsigned long *)&a = a * 37 / 13 + 53;
		*(volatile unsigned long *)&a = a * 37 / 13 + 53;
		*(volatile unsigned long *)&a = a * 37 / 13 + 53;
		*(volatile unsigned long *)&a = a * 37 / 13 + 53;
		*(volatile unsigned long *)&a = a * 37 / 13 + 53;
		*(volatile unsigned long *)&a = a * 37 / 13 + 53;
		*(volatile unsigned long *)&a = a * 37 / 13 + 53;
		*(volatile unsigned long *)&a = a * 37 / 13 + 53;

		if (!(++cnt % (1000000 * 100))) {
			int prio;

			errno = 0;
			prio = getpriority(PRIO_RGRP, 0);
			lprintf("burner : %d finished %lum loops (rgrp nice=%d errno=%d)\n",
				gettid(), cnt / 1000000, prio, errno);
		}
	}
	return 0;
}

static void rgrp_setprio(pid_t pid, int nice)
{
	if (nice == INT_MIN)
		return;

	lprintf("setprio: setting PRIO_RGRP to %d on %d\n", nice, pid);
	if (setpriority(PRIO_RGRP, pid, nice))
		perror("setpriority");
}

static int child_fn(void *arg)
{
	char *stack;
	pid_t pid;

	stack = malloc(STACK_SIZE) + STACK_SIZE;
	pid = clone(burner_fn, stack, CLONE_THREAD_FLAGS | CLONE_NEWRGRP, NULL);
	lprintf("child : cloned nested burner %d\n", pid);
	rgrp_setprio(pid, nice_val[2]);

	stack = malloc(STACK_SIZE) + STACK_SIZE;
	pid = clone(burner_fn, stack, CLONE_THREAD_FLAGS | CLONE_NEWRGRP, NULL);
	lprintf("child : cloned nested burner %d\n", pid);
	rgrp_setprio(pid, nice_val[3]);

	sleep(500);
	return 0;
}

int main(int argc, char **argv)
{
	char *stack;
	pid_t pid;
	int i;

	if (argc > 5)
		argc = 5;
	for (i = 1; i < argc; i++)
		nice_val[i - 1] = atoi(argv[i]);

	pthread_mutex_init(&lprintf_mutex, NULL);

	stack = malloc(STACK_SIZE) + STACK_SIZE;
	pid = clone(burner_fn, stack, CLONE_THREAD_FLAGS | CLONE_NEWRGRP, NULL);
	lprintf("main : cloned burner %d\n", pid);
	rgrp_setprio(pid, nice_val[0]);

	stack = malloc(STACK_SIZE) + STACK_SIZE;
	pid = clone(child_fn, stack, CLONE_THREAD_FLAGS | CLONE_NEWRGRP, NULL);
	lprintf("main : cloned child %d\n", pid);
	rgrp_setprio(pid, nice_val[1]);

	sleep(500);
	return 0;
}