From: Andreas Mohr <andi@lisas.de>
To: Petr Mladek <pmladek@suse.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
	Steven Rostedt <rostedt@goodmis.org>, Jan Kara <jack@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	"Rafael J . Wysocki" <rjw@rjwysocki.net>,
	Eric Biederman <ebiederm@xmission.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jiri Slaby <jslaby@suse.com>, Pavel Machek <pavel@ucw.cz>,
	Andreas Mohr <andi@lisas.de>,
	Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC][PATCHv3 2/5] printk: introduce printing kernel thread
Date: Thu, 29 Jun 2017 08:26:20 +0200
Message-ID: <20170629062620.GA26558@rhlx01.hs-esslingen.de>
In-Reply-To: <20170628121925.GN1538@pathway.suse.cz>

On Wed, Jun 28, 2017 at 02:19:25PM +0200, Petr Mladek wrote:
> On Wed 2017-05-31 16:22:33, Sergey Senozhatsky wrote:
> > so I try to minimize the negative impact of RT prio here. printk_kthread
> > is not special any more. it's an auxiliary kthread that we sometimes
> > wake_up. the thing is that printk_kthread also must offload at some
> > point, basically the same `atomic_print_limit' limit applies to it as
> > well.
> 
> You might call cond_resched() outside console_unlock(). But you have
> to keep printk_kthread in runnable state as long as there are pending
> messages. Then scheduler will always prefer this RT task over non-RT
> tasks. Or am I wrong?

Not sure whether I mentioned this advice before, but:
I believe we should strive for a design where cond_resched() etc. is
*not* needed - cond_resched() / sleep() calls usually are a sign of a
code smell. One should instead strive for handling that is built on a
precisely, strictly handshaked request/response (producer/consumer)
communication protocol, i.e. on proper IPC mechanism objects (mutexes,
wait queues, etc.).
That way one avoids the imprecise, polling-style "are we there yet?
is it our job now?" handling and instead uses precise (and thus *not*
needlessly inefficient!) scheduler wakeup mechanisms.

Thus, it's "merely" (hah!) a matter of designing the handling so that
responsibilities / transitions are clearly spelt out and thus end up
as precisely implemented notifications via those IPC mechanisms.
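
To make that concrete: below is a minimal kernel-style sketch of such a
strictly handshaked producer/consumer pair, using a waitqueue wakeup
instead of any cond_resched()-style polling. This is purely
illustrative and *not* the actual printk code - struct dump_elem,
dump_list, dump_wq and dump_pending_elements() are made-up names:

#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct dump_elem {
	struct list_head node;
	/* ... payload ... */
};

static LIST_HEAD(dump_list);
static DEFINE_SPINLOCK(dump_lock);
static DECLARE_WAIT_QUEUE_HEAD(dump_wq);

/* producer: queue one element, then hand off with one precise wakeup */
void dump_queue_element(struct dump_elem *e)
{
	spin_lock(&dump_lock);
	list_add_tail(&e->node, &dump_list);
	spin_unlock(&dump_lock);
	wake_up(&dump_wq);	/* precise wakeup, no polling */
}

/* consumer: sleeps until there really is work (or it is told to stop) */
int dump_worker(void *unused)
{
	while (!kthread_should_stop()) {
		wait_event_interruptible(dump_wq,
			!list_empty(&dump_list) || kthread_should_stop());
		dump_pending_elements();	/* drains dump_list under dump_lock */
	}
	return 0;
}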


For a very simple setup (which quite possibly cannot be done this
easily!), things could look like this:

void printk_work_wakeup_one_worker(void)
{
  /* a kernel mechanism to reliably wake exactly one waiter is available, I think */
  reliably_notify_only_one_available_computing_hardware_resource();
}

void printk_work_dump_my_package(unsigned int work_count_max)
{
  unsigned int work_count = 0;

  while (work_count++ < work_count_max)
    printk_work_dump_element();
}


void printk_work_dump_handler(unsigned int work_count_max)
{
  /*
   * Mutex added so that the continuation check is done within the
   * *inner* atomic handling section (avoids a race window).
   */
  printk_work_dump_my_package(work_count_max);

  take_mutex();
  bool all_done = !have_payload_remain;
  bool need_continuation = !all_done;
  if (need_continuation)
    printk_work_wakeup_one_worker();
  release_mutex();
}


Worker thread frame function:
WORKER printk_work_worker()
{
  for (;;)
  {
    switch (select())
    {
      case dump_requested:
        printk_work_dump_handler(printk_work_count_max_kthread);
        break;
      case termination_requested:
        return;
    }
  }
}

/* tasked_cpus = all_cpus; */
tasked_cpus = all_cpus / 4;
for (i = 0; i < tasked_cpus; i++)
{
  new_kthread(printk_work_worker);
}



void printk_impl(...)
{
  printk_queue_element(...);
  printk_work_dump_handler(printk_work_count_max_immediate);
}

Very quick-and-dirty thoughts, but they might be helpful...


(Ermm, however, launching #cpus workers most likely is useless, since
the scheduling decision does *not* depend on #cpus - if any computing
resource is available, this work *can* be executed there. Thus quite
likely only one worker is needed anyway - no, two, since one worker
needs to be able to wake up precisely *one* other worker so that
things precisely continue on *another* computing resource!)

Oh, and this doesn't handle (and especially not reliably, i.e.
atomically and without a race window) the case where another activity
triggers another printk queuing - and thus potentially (but not
necessarily!) another package dump - while some worker activity is
already ongoing. I.e. we've got a race window in the (multi-)use of
printk_work_dump_handler() (which should be solvable, though - see the
sketch just below).
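
One way that window could be closed (again purely a hypothetical
sketch, building on the dump_list / dump_lock / struct dump_elem
declarations from the first sketch above and on the helpers of the
worker sketch - not a claim about the actual patch set): keep a
dumper_active flag under the same lock as the payload queue, so that a
concurrent printk either leaves the work to the already-running dumper
or becomes the dumper itself, but never both and never neither (how a
woken worker then claims the role in turn is glossed over here):

static bool dumper_active;	/* protected by dump_lock */

void printk_queue_and_maybe_dump(struct dump_elem *e)
{
	bool do_dump;

	spin_lock(&dump_lock);
	list_add_tail(&e->node, &dump_list);
	do_dump = !dumper_active;	/* claim the dumper role atomically */
	if (do_dump)
		dumper_active = true;
	spin_unlock(&dump_lock);

	if (!do_dump)
		return;	/* the already-active dumper will pick our element up */

	printk_work_dump_my_package(printk_work_count_max_immediate);

	spin_lock(&dump_lock);
	dumper_active = false;
	if (!list_empty(&dump_list))
		/*
		 * We hit the count limit, or more got queued meanwhile:
		 * hand the remainder over with one precise wakeup.
		 */
		printk_work_wakeup_one_worker();
	spin_unlock(&dump_lock);
}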

Oh, and rather than doing specific work_count limit comparisons, it
might actually be more elegant to bundle worker work packages in
*advance*, so that each package can be submitted simply and in
isolation to whichever worker one prefers; the inner handling would
then simply do:

void work_handler(void)
{
  while (!my_package_done)
    dump_package_element();
}

But splitting into separate work packages bears the risk of illegally
reordering printk payload elements, so most likely one should *not* do
this and should instead keep doing simple work_count checks against a
*global*/*shared* printk payload queue (insertion of further elements
into this queue must of course still be possible at any time, without
(excessive) blocking):

while (can_continue_dumping)
{
  payload_mutex_begin();
  element = grab_element();
  payload_mutex_end();
  dump_element(element);
}



and on the queuing side:

payload_mutex_begin();
queue_element(element);
payload_mutex_end();
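
For completeness, the dumper loop written against the (hypothetical)
dump_list / dump_lock declarations from the first sketch above, with
the work_count limit folded in; the lock is only held for the list
operation itself, so queuing from other contexts (which is exactly
dump_queue_element() from that sketch) stays cheap, while the slow
console output happens outside the lock:

void printk_work_dump_loop(unsigned int work_count_max)
{
	unsigned int work_count = 0;

	while (work_count++ < work_count_max) {
		struct dump_elem *element;

		spin_lock(&dump_lock);	/* short critical section only */
		element = list_first_entry_or_null(&dump_list,
						   struct dump_elem, node);
		if (element)
			list_del(&element->node);
		spin_unlock(&dump_lock);

		if (!element)
			break;		/* queue drained, we are done */

		dump_element(element);	/* slow console output, outside the lock */
	}
}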

HTH,

Andreas Mohr
