From: Thomas Gleixner <tglx@linutronix.de>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Andy Lutomirski <luto@amacapital.net>,
	Peter Zijlstra <peterz@infradead.org>,
	Andy Lutomirski <luto@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>, X86 ML <x86@kernel.org>,
	kvm list <kvm@vger.kernel.org>, stable <stable@vger.kernel.org>
Subject: Re: [PATCH v2] x86/kvm: Disable KVM_ASYNC_PF_SEND_ALWAYS
Date: Thu, 09 Apr 2020 01:06:51 +0200
Message-ID: <875ze9r304.fsf@nanos.tec.linutronix.de>
In-Reply-To: <20200408203425.GD93547@redhat.com>

Vivek Goyal <vgoyal@redhat.com> writes:
> On Wed, Apr 08, 2020 at 08:01:36PM +0200, Thomas Gleixner wrote:
>> Forget the current async pf semantics (or the lack thereof). You really
>> want to start from scratch and ignore the whole thing.
>> 
>> The charm of #VE is that the hardware can inject it and it does not nest
>> until the guest has cleared the second word in the VE information area. If
>> that word is not 0 then you get a regular vmexit where you suspend the
>> vcpu until the nested problem is solved.
>
> So IIUC, only one process on a vcpu could afford to relinquish the cpu to
> another task. If the next task also triggers an EPT violation, that will
> result in a VM exit (as the previous #VE is not complete yet) and the vcpu
> will be halted.

No. The guest #VE handler reads the VE information, stores it into a PV
page which is used to communicate with the host, invokes the hypercall
and clears word1 so that a subsequent #VE can be handled.

If the hypercall is telling the host to suspend the guest (e.g. because
the exception hit an interrupt or preemption disabled region where
scheduling is not possible) then the host suspends the vCPU until the
EPT issue is fixed. Before that hypercall returns the state of the
recursion prevention word is irrelevant as the vCPU is not running, so
it's fine to clear it afterwards.

If the hypercall is telling the host that it can schedule and do
something else, then the word is cleared after the hypercall
returns. This makes sure that the host has gathered the information
before another #VE can be injected.

TBH, I really was positively surprised that this HW mechanism actually
makes sense. It prevents the following situation:

  1) Guest triggers an EPT fail

  2) HW injects #VE and sets VE info

  3) Guest handles #VE and before being able to gather the VE info data
     it triggers another EPT fail

  4) HW injects #VE and overwrites VE info -> FAIL

So if we clear the reentrancy protection after saving the info in the
PV page and after the hypercall returns, it's guaranteed that the above
results in:

  1) Guest triggers an EPT fail

  2) HW injects #VE and sets VE info

  3) Guest handles #VE and before being able to gather the VE info data
     and clearing the reentrancy protection it triggers another EPT fail

  4) VMEXIT which needs to be handled by suspending the vCPU, solving
     the issue and resuming it, which allows it to handle the original
     fail #1

>> So you really don't worry about the guest CPU state at all. The guest
>> side #VE handler has to decide what it wants from the host depending on
>> its internal state:
>> 
>>      - Suspend me and resume once the EPT fail is solved
>> 
>>      - Let me park the failing task and tell me once you resolved the
>>        problem.
>> 
>> That's pretty straightforward and avoids the whole nonsense which the
>> current mess contains. It also completely avoids the allocation stuff,
>> as you only need a PV page to which the guest copies the VE
>> information.
>> 
>> The notification that a problem has been resolved needs to go through a
>> separate vector which still has the IF=1 requirement obviously.
>
> How is this vector decided between guest and host?

That's either a fixed vector which then becomes ABI or the guest tells
the host via some PV/hypercall interface that it can handle #VE and that
the vector for notification is number X.
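
A minimal sketch of the second variant on the guest side, where the MSR
number, the vector number and the ve_* names are made up for
illustration and the low level IDT plumbing is omitted:

#define VE_NOTIFY_VECTOR        0xf3            /* made up, becomes ABI */
#define MSR_KVM_VE_NOTIFY       0x4b564df0      /* made up, becomes ABI */

static void __init ve_notify_init(void)
{
        /* Install the guest side entry for the notification vector */
        alloc_intr_gate(VE_NOTIFY_VECTOR, asm_ve_complete_interrupt);
        /* Tell the host which vector to use for completion injection */
        wrmsrl(MSR_KVM_VE_NOTIFY, VE_NOTIFY_VECTOR);
}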

> Failure to fault in the page will be communicated through the same
> vector?

The PV page which communicates the current and any pending EPT fails to
the host is also the communication mechanism for the outcome.

Let's assume that the PV page contains an array of guest/host
communication data structures:

struct ve_info {
        struct ve_hw_info       hw_info;
        unsigned long           state;
        struct rcuwait          wait;
};
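
The state field takes the values used in the code below; collected here
just as a sketch (the names are the ones used further down):

enum ve_state {
        NONE,           /* slot is idle                                  */
        CAN_CONTINUE,   /* guest parked the task, waits for completion   */
        SUSPEND_ME,     /* host has to suspend the vCPU until resolved   */
        RESOLVED,       /* host resolved the EPT fail                    */
        FAIL,           /* host could not resolve the EPT fail           */
};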

The PV page looks like this:

struct ve_page {
        struct ve_info          info[N_ENTRIES];
        unsigned int            guest_current;
        unsigned int            host_current;
        unsigned long           active[BITS_TO_LONGS(N_ENTRIES)];
};
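
There would be one instance of this per vCPU, e.g. as a per CPU variable
which the guest registers with the host during bringup (the registration
is not shown, the declaration is just a sketch):

static DEFINE_PER_CPU_PAGE_ALIGNED(struct ve_page, ve_page);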

The #VE handler does:

        struct ve_page *vp = this_cpu_ptr(&ve_page);
        struct ve_info *info;
        unsigned long state;
        bool can_continue;
        unsigned int idx;

        /* Grab a free slot in the PV page and save the HW provided info */
        idx = find_first_zero_bit(vp->active, N_ENTRIES);
        BUG_ON(idx >= N_ENTRIES);
        set_bit(idx, vp->active);
        info = vp->info + idx;

        copy_ve_info(&info->hw_info, ve_hwinfo);
        vp->guest_current = idx;

        if (test_and_set_thread_flag(TIF_IN_VE) || bitmap_full(vp->active, N_ENTRIES))
                can_continue = false;
        else
                can_continue = user_mode(regs) || preemptible();

        if (can_continue) {
                info->state = CAN_CONTINUE;
                hypercall_ve();
                /* Allow the next #VE only after the host has seen this one */
                ve_hwinfo.reentrancy_protect = 0;
                rcuwait_wait_event(&info->wait, info->state != CAN_CONTINUE, TASK_IDLE);
        } else {
                info->state = SUSPEND_ME;
                hypercall_ve();
                ve_hwinfo.reentrancy_protect = 0;
        }

        state = info->state;
        info->state = NONE;
        clear_bit(idx, vp->active);

        switch (state) {
        case RESOLVED:
                break;

        case FAIL:
                if (user_mode(regs))
                        send_signal(.....);
                else if (!fixup_exception())
                        panic("I'm toast");
                break;

        default:
                panic("Confused");
        }

        clear_thread_flag(TIF_IN_VE);

Now the host completion does:

        struct ve_page *vp = vcpu->ve_page;
        struct ve_info *info = vp->info + idx;
        unsigned long state;

        state = info->state;
        info->state = resolved_state;

        switch (state) {
        case SUSPEND_ME:
                resume_vcpu(vcpu);
                break;
        case CAN_CONTINUE:
                queue_completion(vcpu, idx);
                break;
        default:
                kill_guest();
        }

and the host side completion injection, which delivers a queued
completion once the guest can take the notification interrupt (IF=1),
does:

        struct ve_page *vp = vcpu->ve_page;

        vp->host_current = idx;
        inject_ve_complete(vcpu);

The guest completion does:

        struct ve_page *vp = this_cpu_ptr(&ve_page);
        struct ve_info *info;

        info = vp->info + vp->host_current;
        rcuwait_wake_up(&info->wait);
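
which would sit in the guest interrupt handler for the notification
vector registered above. As a sketch (the handler name is made up, the
low level entry code is omitted):

__visible void ve_complete_interrupt(struct pt_regs *regs)
{
        struct ve_page *vp = this_cpu_ptr(&ve_page);
        struct ve_info *info;

        ack_APIC_irq();
        info = vp->info + vp->host_current;
        rcuwait_wake_up(&info->wait);
}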

There are a bunch of lifetime issues to think about (not rocket
science), but you get the idea.

Thanks,

        tglx

Thread overview: 55+ messages
2020-03-07  2:26 [PATCH v2] x86/kvm: Disable KVM_ASYNC_PF_SEND_ALWAYS Andy Lutomirski
2020-03-07 15:03 ` Andy Lutomirski
2020-03-07 15:47   ` Thomas Gleixner
2020-03-07 15:59     ` Andy Lutomirski
2020-03-07 19:01       ` Thomas Gleixner
2020-03-07 19:34         ` Andy Lutomirski
2020-03-08  7:23         ` Thomas Gleixner
2020-03-09  6:57           ` Thomas Gleixner
2020-03-09  8:40             ` Paolo Bonzini
2020-03-09  9:09               ` Thomas Gleixner
2020-03-09 18:14                 ` Andy Lutomirski
2020-03-09 19:05                   ` Thomas Gleixner
2020-03-09 20:22                     ` Peter Zijlstra
2020-04-06 19:09                       ` Vivek Goyal
2020-04-06 20:25                         ` Peter Zijlstra
2020-04-06 20:32                           ` Andy Lutomirski
2020-04-06 20:42                             ` Andy Lutomirski
2020-04-07 17:21                               ` Vivek Goyal
2020-04-07 17:38                                 ` Andy Lutomirski
2020-04-07 20:20                                   ` Thomas Gleixner
2020-04-07 21:41                                     ` Andy Lutomirski
2020-04-07 22:07                                       ` Paolo Bonzini
2020-04-07 22:29                                         ` Andy Lutomirski
2020-04-08  0:30                                           ` Paolo Bonzini
2020-05-21 15:55                                         ` Vivek Goyal
2020-04-07 22:48                                       ` Thomas Gleixner
2020-04-08  4:48                                         ` Andy Lutomirski
2020-04-08  9:32                                           ` Borislav Petkov
2020-04-08 10:12                                           ` Thomas Gleixner
2020-04-08 18:23                                           ` Vivek Goyal
2020-04-07 22:49                                       ` Vivek Goyal
2020-04-08 10:01                                         ` Borislav Petkov
2020-04-07 22:04                                     ` Paolo Bonzini
2020-04-07 23:21                                       ` Thomas Gleixner
2020-04-08  8:23                                         ` Paolo Bonzini
2020-04-08 13:01                                           ` Thomas Gleixner
2020-04-08 15:38                                             ` Peter Zijlstra
2020-04-08 16:41                                               ` Thomas Gleixner
2020-04-09  9:03                                             ` Paolo Bonzini
2020-04-08 15:34                                           ` Sean Christopherson
2020-04-08 16:50                                             ` Paolo Bonzini
2020-04-08 18:01                                               ` Thomas Gleixner
2020-04-08 20:34                                                 ` Vivek Goyal
2020-04-08 23:06                                                   ` Thomas Gleixner [this message]
2020-04-08 23:14                                                     ` Thomas Gleixner
2020-04-09  4:50                                                 ` Andy Lutomirski
2020-04-09  9:43                                                   ` Paolo Bonzini
2020-04-09 11:36                                                   ` Andrew Cooper
2020-04-09 12:47                                                   ` Paolo Bonzini
2020-04-09 14:13                                                     ` Andrew Cooper
2020-04-09 14:32                                                       ` Paolo Bonzini
2020-04-09 15:03                                                         ` Andy Lutomirski
2020-04-09 15:17                                                           ` Paolo Bonzini
2020-04-09 17:32                                                             ` Andy Lutomirski
2020-04-06 21:32                         ` Thomas Gleixner
