From: Leonardo Bras <leobras@redhat.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Leonardo Bras <leobras@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Frederic Weisbecker <frederic@kernel.org>,
	Neeraj Upadhyay <quic_neeraju@quicinc.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Josh Triplett <josh@joshtriplett.org>,
	Boqun Feng <boqun.feng@gmail.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Zqiang <qiang.zhang1211@gmail.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [RFC PATCH v1 0/2] Avoid rcu_core() if CPU just left guest vcpu
Date: Fri,  3 May 2024 19:00:01 -0300
Message-ID: <ZjVeYVQm1iU-y7JF@LeoBras>
In-Reply-To: <ZjVXVc2e_V8NiMy3@google.com>

On Fri, May 03, 2024 at 02:29:57PM -0700, Sean Christopherson wrote:
> On Fri, May 03, 2024, Leonardo Bras wrote:
> > > KVM can provide that information with much better precision, e.g. KVM
> > > knows when it's in the core vCPU run loop.
> > 
> > That would not be enough.
> > I need to present the application/problem to make a point:
> > 
> > - There are multiple isolated physical CPUs (nohz_full) on which we want to
> >   run KVM_RT vcpus, each running a real-time (low latency) task.
> > - This task should not miss deadlines (RT), so we test the VM to make sure
> >   the maximum latency over a long run does not exceed the latency requirement.
> > - This vcpu will run at SCHED_FIFO, but at a lower priority than rcuc, so
> >   we can avoid stalling other cpus.
> > - There may be some scenarios where the vcpu goes back to userspace (from
> >   the KVM_RUN ioctl), and that does not mean it's fine to interrupt it to
> >   run other stuff (like rcuc).
> >
> > Now, I understand it will cover most of our issues if we have context
> > tracking around the vcpu_run loop, since we can use that to decide not to
> > run rcuc on the cpu if the interruption happened inside the loop.
> > 
> > But IIUC we can have a thread that "just got out of the loop" getting
> > interrupted by the timer and asked to run rcu_core(), which will be bad
> > for latency.
> > 
> > I understand that the chance may be statistically low, but happening once 
> > may be enough to crush the latency numbers.
> > 
> > Now, I can't think of a place to put these context trackers in kvm code
> > that would avoid the chance of rcuc running improperly; that's why I
> > suggested the timeout, even though it's ugly.
> > 
> > About the false positives, IIUC we could reduce them if we reset the
> > per-cpu last_guest_exit on kvm_put.
> 
> Which then opens up the window that you're trying to avoid (IRQ arriving just
> after the vCPU is put, before the CPU exits to userspace).
> 
> If you want the "entry to guest is imminent" status to be preserved across an exit
> to userspace, then it seems like the flag really should be a property of the task,
> not a property of the physical CPU.  Similar to how rcu_is_cpu_rrupt_from_idle()
> detects that an idle task was interrupted, the goal is to detect if a vCPU task
> was interrupted.
> 
> PF_VCPU is already "taken" for similar tracking, but if we want to track "this
> task will soon enter an extended quiescent state", I don't see any reason to make
> it specific to vCPU tasks.  Unless the kernel/KVM dynamically manages the flag,
> which as above will create windows for false negatives, the kernel needs to
> trust userspace to a certain extent no matter what.  E.g. even if KVM sets a
> PF_xxx flag on the first KVM_RUN, nothing would prevent userspace from calling
> into KVM to get KVM to set the flag, and then doing something else entirely with
> the task.
> 
> So if we're comfortable relying on the 1 second timeout to guard against a
> misbehaving userspace, IMO we might as well fully rely on that guardrail.  I.e.
> add a generic PF_xxx flag (or whatever flag location is most appropriate) to let
> userspace communicate to the kernel that it's a real-time task that spends the
> overwhelming majority of its time in userspace or guest context, i.e. should be
> given extra leniency with respect to rcuc if the task happens to be interrupted
> while it's in kernel context.
> 


I think I understand what you propose here.
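
To be concrete, here's a rough sketch of how I read the proposal (the flag
name, bit value, and the guardrail plumbing below are made up for
illustration, not taken from any existing patch):

	#include <linux/sched.h>
	#include <linux/jiffies.h>

	/* Hypothetical task flag; name and bit value are illustrative only. */
	#define PF_RCU_LENIENT	0x01000000

	/*
	 * Sketch of an extra check for rcu_pending(): if the interrupted
	 * task declared itself as spending almost all of its time in
	 * userspace or guest context, and the 1 second guardrail has not
	 * expired yet, don't raise rcuc on this CPU.  How and where the
	 * guardrail timestamp gets armed is hand-waved here.
	 */
	static bool rcu_task_wants_leniency(unsigned long guardrail_armed)
	{
		return (current->flags & PF_RCU_LENIENT) &&
		       time_before(jiffies, guardrail_armed + HZ);
	}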

But I am not sure what would happen in this case:

- RT guest task executes a short HLT
- Host schedules another kernel thread (other task)
- Timer interrupt: rcu_pending() will check the current task, which does not 
  have the above flag set
- rcuc runs, introducing latency
- Goes back to the previous kernel thread, which finishes running with the 
  rcuc latency added
- Goes back to the vcpu thread

Isn't there any chance that, on a short guest HLT, the latency previously 
introduced by rcuc preempting another kernel thread ends up introducing 
latency to the RT task running in the vcpu?
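
For contrast, the per-cpu timestamp from this RFC would cover that case,
since the decision follows the CPU rather than whichever task happened to
be interrupted. A simplified sketch of the idea (not the literal patch;
names are illustrative):

	#include <linux/jiffies.h>
	#include <linux/percpu.h>

	/* Jiffies value recorded at the last guest exit on this CPU. */
	static DEFINE_PER_CPU(unsigned long, last_guest_exit_jiffies);

	/* Called from the KVM guest-exit path. */
	static inline void note_guest_exit(void)
	{
		this_cpu_write(last_guest_exit_jiffies, jiffies);
	}

	/*
	 * Checked from rcu_pending(): if this CPU ran a guest within the
	 * last second, skip raising rcuc even when the interrupted task is
	 * not the vcpu thread (e.g. the kernel thread in the scenario
	 * above).
	 */
	static bool cpu_recently_ran_guest(void)
	{
		return time_before(jiffies,
				   this_cpu_read(last_guest_exit_jiffies) + HZ);
	}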

Thanks!
Leo






Thread overview: 60+ messages
2024-03-28 17:19 [RFC PATCH v1 0/2] Avoid rcu_core() if CPU just left guest vcpu Leonardo Bras
2024-03-28 17:19 ` [RFC PATCH v1 1/2] kvm: Implement guest_exit_last_time() Leonardo Bras
2024-03-28 17:19 ` [RFC PATCH v1 2/2] rcu: Ignore RCU in nohz_full cpus if it was running a guest recently Leonardo Bras
2024-04-01 15:52   ` Paul E. McKenney
2024-04-01 20:21 ` [RFC PATCH v1 0/2] Avoid rcu_core() if CPU just left guest vcpu Sean Christopherson
2024-04-05 13:45   ` Marcelo Tosatti
2024-04-05 14:42     ` Sean Christopherson
2024-04-06  0:03       ` Paul E. McKenney
2024-04-08 17:16         ` Sean Christopherson
2024-04-08 18:42           ` Paul E. McKenney
2024-04-08 20:06             ` Sean Christopherson
2024-04-08 21:02               ` Paul E. McKenney
2024-04-08 21:56                 ` Sean Christopherson
2024-04-08 22:35                   ` Paul E. McKenney
2024-04-08 23:06                     ` Sean Christopherson
2024-04-08 23:20                       ` Paul E. McKenney
2024-04-10  2:39           ` Marcelo Tosatti
2024-04-15 19:47           ` Marcelo Tosatti
2024-04-15 21:29             ` Sean Christopherson
2024-04-16 12:36               ` Marcelo Tosatti
2024-04-16 14:07                 ` Sean Christopherson
2024-04-17 16:14                   ` Marcelo Tosatti
2024-04-17 17:22                     ` Sean Christopherson
2024-05-03 20:44                       ` Leonardo Bras
2024-05-06 18:47                         ` Marcelo Tosatti
2024-05-07 18:05                           ` Sean Christopherson
2024-05-07 22:36                             ` Leonardo Bras
2024-05-03 18:42   ` Leonardo Bras
2024-05-03 19:09     ` Leonardo Bras
2024-05-03 21:29     ` Sean Christopherson
2024-05-03 22:00       ` Leonardo Bras [this message]
2024-05-03 22:00       ` Paul E. McKenney
2024-05-07 17:55         ` Sean Christopherson
2024-05-07 19:15           ` Paul E. McKenney
2024-05-07 21:00             ` Sean Christopherson
2024-05-07 21:37               ` Paul E. McKenney
2024-05-07 23:47                 ` Sean Christopherson
2024-05-08  0:08                   ` Sean Christopherson
2024-05-08  2:51                     ` Leonardo Bras
2024-05-08  3:22                       ` Paul E. McKenney
2024-05-08  6:19                         ` Leonardo Bras
2024-05-08 14:01                           ` Sean Christopherson
2024-05-09  3:32                             ` Paul E. McKenney
2024-05-09  8:16                               ` Leonardo Bras
2024-05-09 10:14                                 ` Leonardo Bras
2024-05-09 23:45                                   ` Paul E. McKenney
2024-05-10 16:06                                     ` Leonardo Bras
2024-05-10 16:21                                       ` Paul E. McKenney
2024-05-10 17:12                                         ` Leonardo Bras
2024-05-10 17:41                                           ` Paul E. McKenney
2024-05-10 19:50                                             ` Leonardo Bras
2024-05-10 21:15                                               ` Leonardo Bras
2024-05-10 21:38                                                 ` Paul E. McKenney
2024-05-09 22:41                                 ` Paul E. McKenney
2024-05-09 23:07                                   ` Leonardo Bras Soares Passos
2024-05-11  2:08                             ` Leonardo Bras
2024-05-08  3:20                     ` Paul E. McKenney
2024-05-08  4:04                       ` Paul E. McKenney
2024-05-08 14:36                         ` Paul E. McKenney
2024-05-08 15:35                       ` Sean Christopherson
