From: Petr Mladek <pmladek@suse.com>
To: Seth Forshee <sforshee@kernel.org>
Cc: Jason Wang <jasowang@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jiri Kosina <jikos@kernel.org>, Miroslav Benes <mbenes@suse.cz>,
	Joe Lawrence <joe.lawrence@redhat.com>,
	Josh Poimboeuf <jpoimboe@kernel.org>,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	netdev@vger.kernel.org, live-patching@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/2] vhost: improve livepatch switching for heavily loaded vhost worker kthreads
Date: Mon, 30 Jan 2023 10:55:28 +0100
Message-ID: <Y9eUEGu+wC5dm0JI@alley>
In-Reply-To: <Y9PmZFBEwUBwV3s/@do-x1extreme>

On Fri 2023-01-27 08:57:40, Seth Forshee wrote:
> On Fri, Jan 27, 2023 at 12:19:03PM +0100, Petr Mladek wrote:
> > Could you please provide some more details about the test system?
> > Is there anything important to make it reproducible?
> > 
> > The following aspects come to my mind. It might require:
> > 
> >    + more workers running on the same system
> >    + have a dedicated CPU for the worker
> >    + livepatching the function called by work->fn()
> >    + running the same work again and again
> >    + huge and overloaded system
> 
> I'm isolating a CPU, starting a KVM guest with a virtio-net device, and
> setting the affinity of the vhost worker thread to only the isolated
> CPU. Thus the vhost-worker thread has a dedicated CPU, as you say. (I'll
> note that in real-world cases the systems have many CPUs, and while the
> vhost threads aren't each given a dedicated CPU, if the system load is
> light enough a thread can end up with exclusive use of a CPU).
> 
> Then all I do is run iperf between the guest and the host with several
> parallel streams. I seem to be hitting the limits of the guest vCPUs
> before the vhost thread is fully saturated, as this gets it to about 90%
> CPU utilization by the vhost thread.

Thanks for the info!
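
For anyone else trying to reproduce this, pinning the vhost worker kthread
to the isolated CPU can be done from userspace with sched_setaffinity(2)
(the same thing "taskset -pc <cpu> <pid>" does). A minimal sketch, assuming
the PID of the vhost kthread has already been looked up, e.g. with ps:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <vhost-worker-pid> <cpu>\n", argv[0]);
		return 1;
	}

	pid_t pid = atoi(argv[1]);
	int cpu = atoi(argv[2]);
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);

	/* Restrict the vhost worker thread to the single isolated CPU. */
	if (sched_setaffinity(pid, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}

	return 0;
}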

> > > > Honestly, kpatch's 1-minute timeout looks incredibly low to me. Note
> > > > that the transition is retried only once per second. It means that there
> > > > are "only" 60 attempts.
> > > > 
> > > > Just by chance, does it help you to increase the timeout, please?
> > > 
> > > To be honest my test setup reproduces the problem well enough to make
> > > KLP wait significant time due to vhost threads, but it seldom causes it
> > > to hit kpatch's timeout.
> > > 
> > > Our system management software will try to load a patch tens of times in
> > > a day, and we've seen real-world cases where patches couldn't load
> > > within kpatch's timeout for multiple days. But I don't have such an
> > > environment readily accessible for my own testing. I can try to refine
> > > my test case and see if I can get it to that point.
> > 
> > My understanding is that you try to load the patch repeatedly but
> > it always fails after the 1-minute timeout. It means that each attempt
> > starts from the beginning (no process has been livepatched yet).
> > 
> > Is there any chance to try it with a longer timeout, for example, one
> > hour? It should increase the chance of success when there are several
> > problematic kthreads.
> 
> Yes, I can try it. But I think I already mentioned that we are somewhat
> limited by our system management software and how livepatch loading is
> currently implemented there. I'd need to consult with others about how
> long we could make the timeout, but 1 hour is definitely too long under
> our current system.

Another possibility is to not wait at all. SUSE livepatch packages load
the livepatch module, remove the no-longer-used livepatch modules, and are
done with it.

Note that the module is loaded quickly. The transition is finished
asynchronously using workqueues.
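
For illustration, a livepatch module is not much more than the in-tree
samples/livepatch/livepatch-sample.c. A minimal sketch, where "orig_fn"
is just a placeholder for whatever vmlinux function gets patched (the
replacement must have the same prototype): klp_enable_patch() in
module_init() only registers the patch and kicks off the transition, it
does not wait for every task to be switched.

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/livepatch.h>

/* Replacement body for the placeholder function being patched. */
static void livepatch_orig_fn(void)
{
	pr_info("running the livepatched version of orig_fn()\n");
}

static struct klp_func funcs[] = {
	{
		.old_name = "orig_fn",	/* placeholder symbol name */
		.new_func = livepatch_orig_fn,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* .name == NULL means the symbol lives in vmlinux. */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int livepatch_init(void)
{
	/* Starts the transition and returns without waiting for it. */
	return klp_enable_patch(&patch);
}

static void livepatch_exit(void)
{
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");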

Of course, there is a risk that the transition will never finish.
That would prevent loading any newer livepatch. But this could be handled
when the newer livepatch is loaded, for example, by reverting the pending
transition, ...

Of course, it would be great to make the transition more reliable.
It would be nice to add a hook into the scheduler, as discussed
in another branch of this thread. But it might bring other problems,
for example, it could affect system performance. Well, it probably could
be optimized or rate-limited.
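
For context, the self-switching idea in the patches is roughly what the
idle loop already does today: let the kthread cross the patching barrier
itself at a point where no patched function can be on its stack. A rough
sketch with the existing interfaces (the worker function is made up, and
this is not the code from the patches):

#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/livepatch.h>
#include <linux/sched.h>

static int example_worker_fn(void *data)
{
	while (!kthread_should_stop()) {
		/* ... handle one piece of work ... */

		/*
		 * Safe point: no function touched by the livepatch is on
		 * the stack here, so the task can switch itself instead
		 * of waiting for the stack-walking retries to catch it
		 * while sleeping.
		 */
		if (unlikely(klp_patch_pending(current)))
			klp_update_patch_state(current);

		cond_resched();
	}

	return 0;
}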

Anyway, I wanted to say that there is a way to get rid of the timeout
completely.

Best Regards,
Petr


Thread overview: 36+ messages
2023-01-20 22:12 [PATCH 0/2] vhost: improve livepatch switching for heavily loaded vhost worker kthreads Seth Forshee (DigitalOcean)
2023-01-20 22:12 ` [PATCH 1/2] livepatch: add an interface for safely switching kthreads Seth Forshee (DigitalOcean)
2023-01-20 22:12 ` [PATCH 2/2] vhost: check for pending livepatches from vhost worker kthreads Seth Forshee (DigitalOcean)
2023-01-24 14:17   ` Petr Mladek
2023-01-24 17:21     ` Seth Forshee
2023-01-25 11:34       ` Petr Mladek
2023-01-25 16:57         ` Seth Forshee
2023-01-26 11:16           ` Petr Mladek
2023-01-26 11:49             ` Petr Mladek
2023-01-22  8:34 ` [PATCH 0/2] vhost: improve livepatch switching for heavily loaded " Michael S. Tsirkin
2023-01-26 17:03 ` Petr Mladek
2023-01-26 21:12   ` Seth Forshee (DigitalOcean)
2023-01-27  4:43     ` Josh Poimboeuf
2023-01-27 10:37       ` Peter Zijlstra
2023-01-27 12:09         ` Petr Mladek
2023-01-27 14:37           ` Seth Forshee
2023-01-27 16:52         ` Josh Poimboeuf
2023-01-27 17:09           ` Josh Poimboeuf
2023-01-27 22:11             ` Josh Poimboeuf
2023-01-30 12:40               ` Peter Zijlstra
2023-01-30 17:50                 ` Seth Forshee
2023-01-30 18:18                 ` Josh Poimboeuf
2023-01-30 18:36                 ` Mark Rutland
2023-01-30 19:48                   ` Josh Poimboeuf
2023-01-31  1:53                     ` Song Liu
2023-01-31 10:22                     ` Mark Rutland
2023-01-31 16:38                       ` Josh Poimboeuf
2023-02-01 11:10                         ` Mark Rutland
2023-02-01 16:57                           ` Josh Poimboeuf
2023-02-01 17:11                             ` Mark Rutland
2023-01-30 19:59                 ` Josh Poimboeuf
2023-01-31 10:02                   ` Peter Zijlstra
2023-01-27 20:02         ` Seth Forshee
2023-01-27 11:19     ` Petr Mladek
2023-01-27 14:57       ` Seth Forshee
2023-01-30  9:55         ` Petr Mladek [this message]
