From: Sean Christopherson <seanjc@google.com>
To: Ben Gardon <bgardon@google.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>, Peter Xu <peterx@redhat.com>,
	Peter Shier <pshier@google.com>,
	Peter Feiner <pfeiner@google.com>,
	Junaid Shahid <junaids@google.com>,
	Jim Mattson <jmattson@google.com>,
	Yulei Zhang <yulei.kernel@gmail.com>,
	Wanpeng Li <kernellwp@gmail.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Xiao Guangrong <xiaoguangrong.eric@gmail.com>
Subject: Re: [PATCH 05/24] kvm: x86/mmu: Fix yielding in TDP MMU
Date: Wed, 20 Jan 2021 11:28:40 -0800
Message-ID: <YAiEaB/t/o9JvRN4@google.com>
In-Reply-To: <20210112181041.356734-6-bgardon@google.com>

On Tue, Jan 12, 2021, Ben Gardon wrote:
> There are two problems with the way the TDP MMU yields in long running
> functions. 1.) Given certain conditions, the function may not yield
> reliably / frequently enough. 2.) In some functions the TDP iter risks
> not making forward progress if two threads livelock yielding to
> one another.
> 
> Case 1 is possible if, for example, a paging structure was very large
> but had few, if any, writable entries. wrprot_gfn_range could traverse
> many entries before finding a writable entry and yielding.
> 
> Case 2 is possible if two threads were trying to execute wrprot_gfn_range.
> Each could write protect an entry and then yield. This would reset the
> tdp_iter's walk over the paging structure and the loop would end up
> repeating the same entry over and over, preventing either thread from
> making forward progress.
> 
> Fix these issues by moving the yield to the beginning of the loop,
> before other checks, and only yielding if the loop has made forward
> progress since the last yield.

I think it'd be best to split this into two patches, e.g. ensure forward
progress and then yield more aggressively.  They are two separate bugs, and I
don't think that ensuring forward progress would exacerbate case #1.  I'm not
worried about breaking things so much as getting more helpful shortlogs; "Fix
yielding in TDP MMU" doesn't provide any insight into what exactly was broken.
E.g. something like:

  KVM: x86/mmu: Ensure forward progress when yielding in TDP MMU iter
  KVM: x86/mmu: Yield in TDP MMU iter even if no real work was done

> Fixes: a6a0b05da9f3 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
> Reviewed-by: Peter Feiner <pfeiner@google.com>
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 83 +++++++++++++++++++++++++++++++-------
>  1 file changed, 69 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index b2784514ca2d..1987da0da66e 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -470,9 +470,23 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
>  			  gfn_t start, gfn_t end, bool can_yield)
>  {
>  	struct tdp_iter iter;
> +	gfn_t last_goal_gfn = start;
>  	bool flush_needed = false;
>  
>  	tdp_root_for_each_pte(iter, root, start, end) {
> +		/* Ensure forward progress has been made before yielding. */
> +		if (can_yield && iter.goal_gfn != last_goal_gfn &&

Make last_goal_gfn a property of the iterator; that way all this logic can be
shoved into tdp_mmu_iter_flush_cond_resched(), and the comments about ensuring
forward progress and effectively invalidating/resetting the iterator (the
comment below) can be a function comment, as opposed to being copied everywhere.
E.g. there can be a big scary warning in the function comment stating that the
caller must restart its loop if the helper yielded.

Tangentially related, the name goal_gfn is quite confusing.  "goal" and "end"
are synonyms, but "goal" is often initialized with "start", and it's not used to
terminate the walk.  Maybe next_gfn instead?  And maybe yielded_gfn, since
last_next_gfn is pretty horrendous.
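
E.g. a rough sketch of the iterator change, assuming the next_gfn/yielded_gfn
names suggested above (field comments are illustrative, other fields elided):

struct tdp_iter {
	/* The iterator walks the paging structure toward this GFN. */
	gfn_t next_gfn;
	/*
	 * Snapshot of next_gfn taken the last time this thread yielded.
	 * Yielding only when next_gfn != yielded_gfn guarantees forward
	 * progress, as at least one SPTE must be visited in between yields.
	 */
	gfn_t yielded_gfn;
	...
};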

> +		    tdp_mmu_iter_flush_cond_resched(kvm, &iter)) {

This isn't quite correct, as tdp_mmu_iter_flush_cond_resched() will do an
expensive remote TLB flush on every yield, even if no flush is needed.  The
cleanest solution is likely to drop tdp_mmu_iter_flush_cond_resched() and
instead add a @flush param to tdp_mmu_iter_cond_resched().  If it's tagged
__always_inline, then the callers that unconditionally pass true/false will
optimize out the conditional code.

At that point, I think it would also make sense to fold tdp_iter_refresh_walk()
into tdp_mmu_iter_cond_resched(), because really we shouldn't be mucking with
the guts of the iter except for the yield case.

> +			last_goal_gfn = iter.goal_gfn;

Another argument for both renaming goal_gfn and moving last_*_gfn into the iter:
it's not at all obvious that updating the last gfn _after_ tdp_iter_refresh_walk()
is indeed correct.

You can also avoid a local variable by doing max(iter->next_gfn, iter->gfn) when
calling tdp_iter_refresh_walk().  IMO, that's also a bit easier to understand
than an open-coded equivalent.

E.g. putting it all together, with yielded_gfn set by tdp_iter_start():

static __always_inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
						     struct tdp_iter *iter,
						     bool flush)
{
	/* Ensure forward progress has been made since the last yield. */
	if (iter->next_gfn == iter->yielded_gfn)
		return false;

	if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
		if (flush)
			kvm_flush_remote_tlbs(kvm);
		cond_resched_lock(&kvm->mmu_lock);

		/*
		 * Restart the walk over the paging structure from the root,
		 * starting from the highest gfn the iterator had previously
		 * reached.  The entire paging structure, except the root, may
		 * have been completely torn down and rebuilt while we yielded.
		 */
		tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
			       iter->root_level, iter->min_level,
			       max(iter->next_gfn, iter->gfn));
		return true;
	}

	return false;
}
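
And, purely for illustration, a rough (untested) sketch of how the
zap_gfn_range() loop could then consume the helper:

	tdp_root_for_each_pte(iter, root, start, end) {
		/*
		 * If the helper yielded, it restarted the walk from the
		 * root and performed any pending flush, so drop the flush
		 * and skip to the next iteration.
		 */
		if (can_yield &&
		    tdp_mmu_iter_cond_resched(kvm, &iter, flush_needed)) {
			flush_needed = false;
			continue;
		}

		if (!is_shadow_present_pte(iter.old_spte))
			continue;
		...
	}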

> +			flush_needed = false;
> +			/*
> +			 * Yielding caused the paging structure walk to be
> +			 * reset so skip to the next iteration to continue the
> +			 * walk from the root.
> +			 */
> +			continue;
> +		}
> +
>  		if (!is_shadow_present_pte(iter.old_spte))
>  			continue;
>  
