* [LTP] [PATCH v1] cgroup_fj_stress: Avoid killall
@ 2019-11-05 11:20 Clemens Famulla-Conrad
  2019-11-05 13:20 ` Petr Vorel
  0 siblings, 1 reply; 4+ messages in thread
From: Clemens Famulla-Conrad @ 2019-11-05 11:20 UTC (permalink / raw)
  To: ltp

We discovered problems where killall didn't catch all processes. With
this patch, we collect the PIDs manually and kill them one after the
other.
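
For illustration, the pattern the patch switches to is roughly the
following (a minimal sketch with a placeholder worker instead of the
real cgroup_fj_proc):

    pids=
    for i in 1 2 3; do
        sleep 100 &              # stand-in for cgroup_fj_proc
        pids="$pids $!"
    done
    for pid in $pids; do
        kill -9 "$pid"           # signal each collected PID directly
        wait "$pid"              # reap it so nothing is left behind
    done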

Signed-off-by: Clemens Famulla-Conrad <cfamullaconrad@suse.de>
---
 testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh b/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
index 698aa4979..27ea7634a 100755
--- a/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
+++ b/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
@@ -74,6 +74,7 @@ setup
 export TMPFILE=./tmp_tasks.$$
 
 count=0
+collected_pids=""
 
 build_subgroups()
 {
@@ -107,6 +108,7 @@ attach_task()
     if [ -z "$ppid" ]; then
         cgroup_fj_proc&
         pid=$!
+        collected_pids="$collected_pids $pid"
     else
         pid="$ppid"
     fi
@@ -148,9 +150,10 @@ case $attach_operation in
 "each" )
     tst_resm TINFO "Attaching task to each subgroup"
     attach_task "$start_path" 0
-    ROD killall -9 "cgroup_fj_proc"
-    # Wait for attached tasks to terminate
-    wait
+    for pid in $collected_pids; do
+        ROD kill -9 "$pid"
+        wait "$pid"
+    done
     ;;
 *  )
     ;;
-- 
2.16.4



* [LTP] [PATCH v1] cgroup_fj_stress: Avoid killall
  2019-11-05 11:20 [LTP] [PATCH v1] cgroup_fj_stress: Avoid killall Clemens Famulla-Conrad
@ 2019-11-05 13:20 ` Petr Vorel
  2019-11-05 13:49   ` Clemens Famulla-Conrad
  0 siblings, 1 reply; 4+ messages in thread
From: Petr Vorel @ 2019-11-05 13:20 UTC (permalink / raw)
  To: ltp

Hi Clemens,

> We discovered problems where killall didn't catch all processes. With
> this patch, we collect the PIDs manually and kill them one after the
> other.

LGTM.
I wonder if we also want to kill cgroup_fj_proc this way (see cgroup_fj_common.sh).

I guess you're not planning to create a minimal reproducer to prove the
problem of processes left behind after killall, are you?

> Signed-off-by: Clemens Famulla-Conrad <cfamullaconrad@suse.de>
> ---
>  testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)

> diff --git a/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh b/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
> index 698aa4979..27ea7634a 100755
> --- a/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
> +++ b/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
> @@ -74,6 +74,7 @@ setup
>  export TMPFILE=./tmp_tasks.$$

>  count=0
> +collected_pids=""
nit:
collected_pids=

...

Kind regards,
Petr


* [LTP] [PATCH v1] cgroup_fj_stress: Avoid killall
  2019-11-05 13:20 ` Petr Vorel
@ 2019-11-05 13:49   ` Clemens Famulla-Conrad
  2019-11-06 17:12     ` Petr Vorel
  0 siblings, 1 reply; 4+ messages in thread
From: Clemens Famulla-Conrad @ 2019-11-05 13:49 UTC (permalink / raw)
  To: ltp

Hi Petr,

On Tue, 2019-11-05 at 14:20 +0100, Petr Vorel wrote:
<snip>
> I wonder if we also want to kill cgroup_fj_proc this way (see
> cgroup_fj_common.sh).

I'm not sure I understand you. We do kill cgroup_fj_proc this way.
The `killall -9 cgroup_fj_proc` call in cgroup_fj_common.sh looks to me
like a cleanup, and there is no `wait` or similar afterwards, so I would
guess we are not facing the problem there, and I would keep killall in
that place.
As far as I can see, all other `cgroup_fj_proc&` calls already kill
their processes individually.
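
To illustrate the distinction (a rough sketch only, not the actual code
in cgroup_fj_common.sh):

    # cleanup path: best-effort kill by name, no wait afterwards, so a
    # missed process is merely left behind
    killall -9 cgroup_fj_proc

    # test path after this patch: each collected PID is killed and
    # reaped, so wait cannot block on a process killall failed to signal
    for pid in $collected_pids; do
        kill -9 "$pid"
        wait "$pid"
    done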

> I guess you're not planning to create a minimal reproducer to prove
> the problem of processes left behind after killall, are you?

Sure, nice idea, I can give it a try, but not within this patchset.
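Off the top of my head, the direction would be something like this
(completely untested, assumes cgroup_fj_proc is in PATH, and it may
well not trigger the problem at all):

    for i in $(seq 1 200); do
        cgroup_fj_proc &
    done
    killall -9 cgroup_fj_proc
    sleep 1
    # any survivor means killall did not catch every process
    pgrep cgroup_fj_proc && echo "killall missed some processes"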

Thanks
Clemens


* [LTP] [PATCH v1] cgroup_fj_stress: Avoid killall
  2019-11-05 13:49   ` Clemens Famulla-Conrad
@ 2019-11-06 17:12     ` Petr Vorel
  0 siblings, 0 replies; 4+ messages in thread
From: Petr Vorel @ 2019-11-06 17:12 UTC (permalink / raw)
  To: ltp

Hi Clemens,

> > I wonder if we also want to kill cgroup_fj_proc this way (see
> > cgroup_fj_common.sh).

> I'm not sure I understand you. We do kill cgroup_fj_proc this way.
> The `killall -9 cgroup_fj_proc` call in cgroup_fj_common.sh looks to me
> like a cleanup, and there is no `wait` or similar afterwards, so I would
> guess we are not facing the problem there, and I would keep killall in
> that place.
> As far as I can see, all other `cgroup_fj_proc&` calls already kill
> their processes individually.
OK, merged :).
Thanks!

> > I guess you're not planning to create a minimal reproducer to prove
> > the problem of processes left behind after killall, are you?

> Sure, nice idea, I can give it a try, but not within this patchset.
Thanks!

Kind regards,
Petr


