From: Petr Mladek <pmladek@suse.com>
To: Vasily Gorbik <gor@linux.ibm.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>,
	Jiri Kosina <jikos@kernel.org>, Miroslav Benes <mbenes@suse.cz>,
	Joe Lawrence <joe.lawrence@redhat.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Sumanth Korikkar <sumanthk@linux.ibm.com>,
	live-patching@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] livepatch: Speed up transition retries
Date: Thu, 8 Jul 2021 12:35:24 +0200
Message-ID: <YObU7HQ1vUAQzME3@alley>
In-Reply-To: <patch.git-3127eb42c636.your-ad-here.call-01625661963-ext-4010@work.hours>

On Wed 2021-07-07 14:49:41, Vasily Gorbik wrote:
> That's just a racy hack for now, for demonstration purposes.
> 
> On an s390 system with a large number of CPUs,
> klp_try_complete_transition() often cannot complete on the first
> attempt, so it reschedules itself as delayed work with a one-second
> delay. This adds up to a significant amount of time when there is a
> large number of livepatching transitions.
> 
> This patch tries to minimize that delay by counting the processes
> which still need to be transitioned and scheduling
> klp_try_complete_transition() right away once the count drops to zero.
> 
> For an s390 LPAR with 128 CPUs, this reduces the livepatch kselftest
> run time from
> real    1m11.837s
> user    0m0.603s
> sys     0m10.940s
> 
> to
> real    0m14.550s
> user    0m0.420s
> sys     0m5.779s
> 
> And the qa_test_klp run time from
> real    5m15.950s
> user    0m34.447s
> sys     15m11.345s
> 
> to
> real    3m51.987s
> user    0m27.074s
> sys     9m37.301s
> 
> Would something like this be useful for production use cases?
> Any ideas on how to approach it more gracefully?
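
For context, the retry in question lives in klp_try_complete_transition()
in kernel/livepatch/transition.c. Below is a minimal sketch of the
mainline behavior and of the counting idea; klp_nr_pending and
klp_task_switched() are hypothetical names for illustration, not taken
from the posted patch:

#include <linux/atomic.h>
#include <linux/workqueue.h>

static void klp_transition_work_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(klp_transition_work, klp_transition_work_fn);

/* Hypothetical: number of tasks still carrying TIF_PATCH_PENDING. */
static atomic_t klp_nr_pending = ATOMIC_INIT(0);

/*
 * Mainline behavior: when klp_try_complete_transition() finds any task
 * that has not switched yet, it gives up and retries a full second
 * later, no matter how soon the last straggler actually switches:
 *
 *	schedule_delayed_work(&klp_transition_work,
 *			      round_jiffies_relative(HZ));
 *
 * The idea here: decrement the counter whenever a task finishes
 * switching, and kick the work immediately once it reaches zero.
 */
static void klp_task_switched(void)
{
	if (atomic_dec_and_test(&klp_nr_pending))
		mod_delayed_work(system_wq, &klp_transition_work, 0);
}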

Honestly, I do not see a real-life use case for this, except maybe
speeding up a test suite.

The livepatch transition is more about reliability than about speed.
In real life, a livepatch is applied only once in a while.

We have spent weeks thinking about and discussing the consistency
model, the code, and the barriers needed to handle the races correctly.
In particular, klp_update_patch_state() is a super-sensitive beast
because it is called without klp_mutex. It might be pretty hard to
synchronize it with klp_reverse_transition() or klp_force_transition().
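
For example, the lock-free path looks roughly like this in mainline
(abridged from kernel/livepatch/transition.c; the exact comments differ,
but the barrier pairing is the point):

void klp_update_patch_state(struct task_struct *task)
{
	preempt_disable();

	/*
	 * test_and_clear_tsk_thread_flag() also acts as a read barrier:
	 * TIF_PATCH_PENDING must be read before klp_target_state, which
	 * pairs with the write ordering in klp_init_transition(). Get
	 * this wrong and a reversed transition can leave a task with a
	 * stale patch_state.
	 */
	if (test_and_clear_tsk_thread_flag(task, TIF_PATCH_PENDING))
		task->patch_state = READ_ONCE(klp_target_state);

	preempt_enable();
}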

You would need to come up with a really convincing use case and
numbers to make it worth the effort.

Best Regards,
Petr


Thread overview: 4 messages
2021-07-07 12:49 [RFC PATCH] livepatch: Speed up transition retries Vasily Gorbik
2021-07-08 10:35 ` Petr Mladek [this message]
2021-07-08 13:19   ` Vasily Gorbik
2021-07-08 14:57     ` Petr Mladek
