From: Riccardo Mancini <rickyman7@gmail.com>
To: Namhyung Kim <namhyung@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>,
	Ian Rogers <irogers@google.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Mark Rutland <mark.rutland@arm.com>, Jiri Olsa <jolsa@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	linux-perf-users <linux-perf-users@vger.kernel.org>
Subject: Re: [RFC PATCH 04/10] perf workqueue: add threadpool execute and wait functions
Date: Fri, 16 Jul 2021 15:55:39 +0200	[thread overview]
Message-ID: <b82d7c8f1c1d48f9a79d30d2e76cf56b1f7c8ee0.camel@gmail.com> (raw)
In-Reply-To: <CAM9d7ci9vCBsyYAj6kXUAtEufG9PW0D=1-7PP-VrA_PPU6wbkA@mail.gmail.com>

Hi Namhyung,
thanks again for the review.

On Thu, 2021-07-15 at 16:56 -0700, Namhyung Kim wrote:
> On Tue, Jul 13, 2021 at 5:11 AM Riccardo Mancini <rickyman7@gmail.com> wrote:
> > 
> > This patch adds:
> >  - execute_in_threadpool: assigns a task to the threads to execute
> >    asynchronously.
> >  - wait_threadpool: waits for the task to complete on all threads.
> > Furthermore, testing for these new functions is added.
> > 
> > This patch completes the threadpool.
> > 
> > Signed-off-by: Riccardo Mancini <rickyman7@gmail.com>
> > ---
> >  tools/perf/tests/workqueue.c           |  86 ++++++++++++++++++++-
> >  tools/perf/util/workqueue/threadpool.c | 103 +++++++++++++++++++++++++
> >  tools/perf/util/workqueue/threadpool.h |   5 ++
> >  3 files changed, 193 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/perf/tests/workqueue.c b/tools/perf/tests/workqueue.c
> > index be377e9897bab4e9..3c64db8203556847 100644
> > --- a/tools/perf/tests/workqueue.c
> > +++ b/tools/perf/tests/workqueue.c
> > @@ -1,13 +1,59 @@
> >  // SPDX-License-Identifier: GPL-2.0
> > +#include <stdlib.h>
> >  #include <linux/kernel.h>
> > +#include <linux/zalloc.h>
> >  #include "tests.h"
> >  #include "util/debug.h"
> >  #include "util/workqueue/threadpool.h"
> > 
> > +#define DUMMY_FACTOR 100000
> > +#define N_DUMMY_WORK_SIZES 7
> > +
> >  struct threadpool_test_args_t {
> >         int pool_size;
> >  };
> > 
> > +struct test_task {
> > +       struct task_struct task;
> > +       int n_threads;
> > +       int *array;
> > +};
> > +
> > +/**
> > + * dummy_work - calculates DUMMY_FACTOR * (idx % N_DUMMY_WORK_SIZES) inefficiently
> > + *
> > + * This function uses modulus to create work items of different sizes.
> > + */
> > +static void dummy_work(int idx)
> > +{
> > +       int prod = 0;
> 
> I'm not sure, but having 'volatile' would prevent some possible
> compiler optimizations.

Agreed.
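
For v2 I'm thinking of something along these lines (untested sketch, only
marking the accumulator volatile so the busy loop isn't optimized away):

	static void dummy_work(int idx)
	{
		volatile int prod = 0;	/* volatile keeps the loop from being elided */
		int k = idx % N_DUMMY_WORK_SIZES;
		int i, j;

		for (i = 0; i < DUMMY_FACTOR; i++)
			for (j = 0; j < k; j++)
				prod++;

		pr_debug3("dummy: %d * %d = %d\n", DUMMY_FACTOR, k, prod);
	}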

> 
> > +       int k = idx % N_DUMMY_WORK_SIZES;
> > +       int i, j;
> > +
> > +       for (i = 0; i < DUMMY_FACTOR; i++)
> > +               for (j = 0; j < k; j++)
> > +                       prod ++;
> > +
> > +       pr_debug3("dummy: %d * %d = %d\n", DUMMY_FACTOR, k, prod);
> > +}
> > +
> > +static void test_task_fn1(int tidx, struct task_struct *task)
> > +{
> > +       struct test_task *mtask = container_of(task, struct test_task, task);
> > +
> > +       dummy_work(tidx);
> > +       mtask->array[tidx] = tidx+1;
> > +}
> > +
> > +static void test_task_fn2(int tidx, struct task_struct *task)
> > +{
> > +       struct test_task *mtask = container_of(task, struct test_task, task);
> > +
> > +       dummy_work(tidx);
> > +       mtask->array[tidx] = tidx*2;
> > +}
> > +
> > +
> >  static int __threadpool__prepare(struct threadpool_struct **pool, int pool_size)
> >  {
> >         int ret;
> > @@ -38,21 +84,59 @@ static int __threadpool__teardown(struct threadpool_struct *pool)
> >         return 0;
> >  }
> > 
> > +static int __threadpool__exec_wait(struct threadpool_struct *pool,
> > +                               struct task_struct *task)
> > +{
> > +       int ret;
> > +
> > +       ret = execute_in_threadpool(pool, task);
> > +       TEST_ASSERT_VAL("threadpool execute failure", ret == 0);
> > +       TEST_ASSERT_VAL("threadpool is not executing", threadpool_is_busy(pool));
> > +
> > +       ret = wait_threadpool(pool);
> > +       TEST_ASSERT_VAL("threadpool wait failure", ret == 0);
> > +       TEST_ASSERT_VAL("waited threadpool is not ready", threadpool_is_ready(pool));
> > +
> > +       return 0;
> > +}
> > 
> >  static int __test__threadpool(void *_args)
> >  {
> >         struct threadpool_test_args_t *args = _args;
> >         struct threadpool_struct *pool;
> > -       int ret;
> > +       int ret, i;
> > +       struct test_task task;
> > +
> > +       task.task.fn = test_task_fn1;
> > +       task.n_threads = args->pool_size;
> > +       task.array = calloc(args->pool_size, sizeof(*task.array));
> 
> Need to check the return value.

Thanks.
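
Will fix in v2. Roughly this (sketch; assuming TEST_FAIL from tests.h is the
right thing to return here):

	task.array = calloc(args->pool_size, sizeof(*task.array));
	if (!task.array) {
		pr_debug("failed to allocate test array\n");
		return TEST_FAIL;
	}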

> 
> > 
> >         ret = __threadpool__prepare(&pool, args->pool_size);
> >         if (ret)
> >                 return ret;
> > 
> > +       ret = __threadpool__exec_wait(pool, &task.task);
> > +       if (ret)
> > +               return ret;
> > +
> > +       for (i = 0; i < args->pool_size; i++)
> > +               TEST_ASSERT_VAL("failed array check (1)", task.array[i] == i+1);
> > +
> > +       task.task.fn = test_task_fn2;
> > +
> > +       ret = __threadpool__exec_wait(pool, &task.task);
> > +       if (ret)
> > +               return ret;
> > +
> > +       for (i = 0; i < args->pool_size; i++)
> > +               TEST_ASSERT_VAL("failed array check (2)", task.array[i] == 2*i);
> > +
> >         ret = __threadpool__teardown(pool);
> >         if (ret)
> >                 return ret;
> > 
> > +       free(task.array);
> 
> All previous returns will leak it.

Oh, right.
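
I'll rework the error paths in v2 so the array is always freed, roughly like
this (sketch only, with an extra 'err' local; the TEST_ASSERT_VAL early
returns inside the loops would need the same treatment):

	ret = __threadpool__prepare(&pool, args->pool_size);
	if (ret)
		goto out_free_array;

	ret = __threadpool__exec_wait(pool, &task.task);
	if (ret)
		goto out_teardown;

	/* ... array checks and the second exec/wait ... */

out_teardown:
	/* don't let a successful teardown hide an earlier failure */
	err = __threadpool__teardown(pool);
	if (!ret)
		ret = err;
out_free_array:
	free(task.array);
	return ret;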

Thanks,
Riccardo

> 
> Thanks,
> Namhyung


