From: Eugene Huang <eugeneh@nvidia.com>
To: Marc Zyngier <maz@kernel.org>
Cc: "kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>
Subject: RE: Timer delays in VM
Date: Tue, 8 Mar 2022 07:50:50 +0000	[thread overview]
Message-ID: <BYAPR12MB31928F0EECC9EDAEA01BA7F3D9099@BYAPR12MB3192.namprd12.prod.outlook.com> (raw)
In-Reply-To: <87v8wv14iq.wl-maz@kernel.org>

Hi Marc,

It looks like, to troubleshoot this effectively, we need to upgrade our guest kernel to the latest version, 5.17. I have a few questions:

1. Can you please point us to the kernel source code that programs the ARM virtual timer? (A rough sketch of what I mean is included below.)
2. Is the Generic Timer section of the ARM Architecture Reference Manual (ARMv8) the correct reference for us to use?
3. From the kernel community's point of view, which kernel is better for the guest - the Ubuntu one or the upstream one?
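
For question 1, here is a rough sketch of what I mean by "programs the
virtual timer": how I understand a guest kernel at EL1 arms the virtual
timer for a one-shot event, using the CNTV_* registers described in the
Generic Timer chapter. This is only an illustration of my current mental
model, not code taken from any kernel tree, so please correct it if the
model itself is wrong.

/* Illustrative only: arm the EL1 virtual timer for a one-shot event.
 * The virtual timer PPI should fire once the virtual count (CNTVCT_EL0)
 * reaches the programmed compare value (CNTV_CVAL_EL0).
 */
#include <stdint.h>

static inline uint64_t read_cntvct(void)
{
        uint64_t cnt;

        asm volatile("mrs %0, cntvct_el0" : "=r" (cnt));
        return cnt;
}

static inline void arm_virt_timer(uint64_t delta_ticks)
{
        uint64_t cval = read_cntvct() + delta_ticks;

        /* Program the compare value first... */
        asm volatile("msr cntv_cval_el0, %0" : : "r" (cval));
        /* ...then enable the timer with the interrupt unmasked:
         * CNTV_CTL_EL0.ENABLE = 1, CNTV_CTL_EL0.IMASK = 0.
         */
        asm volatile("msr cntv_ctl_el0, %0" : : "r" ((uint64_t)1));
        asm volatile("isb");
}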

Thanks,
Eugene

> -----Original Message-----
> From: Marc Zyngier <maz@kernel.org>
> Sent: Thursday, March 3, 2022 6:43 AM
> To: Eugene Huang <eugeneh@nvidia.com>
> Cc: kvmarm@lists.cs.columbia.edu
> Subject: Re: Timer delays in VM
> 
> On Thu, 03 Mar 2022 05:49:00 +0000,
> Eugene Huang <eugeneh@nvidia.com> wrote:
> >
> > <EH> We have the following further 1-to-1 mappings:
> > pcpu-20 - vcpu-0 is running your timer test, everything is fine
> > pcpu-21 - vcpu-1 starts some other workload, and this affects the
> > timer test on the other vcpu
> >
> > - Each vCPU thread is pinned to its individual pCPU on the host (vcpupin in
> libvirt).
> > - Each pCPU on which a vCPU thread runs is isolated on the host (isolcpus).
> > - Each vCPU that runs the workload is isolated in the guest VM (isolcpus).
> >
> > So we are pretty sure the workloads are separated.
> 
> Hmmm. Isolcpus certainly is something I never use. You may want to check
> whether this has an influence on your test's behaviour. You may also
> want to post your full libvirt config, just in case someone spots an issue there
> (I won't, as I know next to nothing about libvirt).
> 
> >
> > >
> > > Also, please work out whether you exit because of a blocking WFI or
> > > WFE, as they are indicative of different guest behaviour.
> >
> > <EH> Will do. Somehow our current trace does not show this information.
> >
> > >
> > > > Since we pin that workload to its own vCPU, in theory, it should
> > > > not affect the timing of another vCPU.
> > >
> > > Why not? A vcpu is just a host thread, and if they share a physical
> > > CPU at some point, there is a knock-on effect.
> >
> > <EH> Again, because of vcpupin in libvirt, there is no sharing of a
> > pCPU among vCPUs. At least that is our configuration intention.
> 
> Which may or may not be what happens in reality. libvirt is largely opaque,
> and just because you ask it to do something doesn't mean it happens the way
> you hope it does.
> 
> >
> > >
> > > > > You also don't mention what host kernel version you are running.
> > > > > In general, please try and reproduce the issue using the latest
> > > > > kernel version
> > > > > (5.16 at the moment). Please also indicate what HW you are using.
> > > >
> > > > <EH> Tried 5.15 and 5.4 kernels. Both have the issue. Do you think
> > > > 5.16 can make a difference? The HW is an Ampere Altra system.
> > >
> > > Unlikely. The Altra is a mostly sane system, as long as you make
> > > sure that VMs don't migrate across sockets (at which point it becomes
> laughably bad).
> > > Nothing to do with KVM though.
> >
> > <EH> Right, there is no migration of VMs.
> > I see the KVM arm timer-related code is very different between 5.4 and
> > 5.15/5.16.  Can we still use 5.4 for both the host and the guest?
> 
> That's your call. I stopped looking at 5.4 a couple of minutes after it was
> released. If I'm going to look for something, that will be on top of upstream.
> 
>         M.
> 
> --
> Without deviation from the norm, progress is not possible.
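
P.S. On the point above that libvirt may or may not actually deliver the
pinning we ask for: one way to double-check on the host is to read back
the affinity mask of each vCPU thread directly. The sketch below is only
an illustration (the real vCPU thread IDs would have to be looked up,
e.g. under /proc/<qemu-pid>/task/, and the TID passed on the command
line here is just a placeholder).

/* Illustrative check: print which pCPUs a given vCPU thread is actually
 * allowed to run on, to confirm that the vcpupin settings took effect
 * on the host.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <vcpu-tid>\n", argv[0]);
                return 1;
        }

        pid_t tid = (pid_t)atoi(argv[1]);
        cpu_set_t mask;

        if (sched_getaffinity(tid, sizeof(mask), &mask)) {
                perror("sched_getaffinity");
                return 1;
        }

        printf("TID %d is allowed to run on CPUs:", tid);
        for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
                if (CPU_ISSET(cpu, &mask))
                        printf(" %d", cpu);
        printf("\n");
        return 0;
}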

Thread overview: 12+ messages
2022-02-28 18:02 Timer delays in VM Eugene Huang
2022-02-28 21:02 ` Marc Zyngier
2022-03-01  9:06   ` Andrew Jones
2022-03-01 19:03   ` Eugene Huang
2022-03-02  2:27     ` Eugene Huang
2022-03-02  7:28     ` Marc Zyngier
2022-03-03  5:49       ` Eugene Huang
2022-03-03 14:42         ` Marc Zyngier
2022-03-08  7:50           ` Eugene Huang [this message]
2022-03-08  9:34             ` Marc Zyngier
  -- strict thread matches above, loose matches on Subject: below --
2022-02-28  6:22 Eugene Huang
2022-02-28 11:02 ` Peter Maydell
