linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/2] KVM: selftests: fix races in dirty log test
@ 2021-04-17 14:36 Peter Xu
  2021-04-17 14:36 ` [PATCH v3 1/2] KVM: selftests: Sync data verify of dirty logging with guest sync Peter Xu
  2021-04-17 14:36 ` [PATCH v3 2/2] KVM: selftests: Wait for vcpu thread before signal setup Peter Xu
  0 siblings, 2 replies; 8+ messages in thread
From: Peter Xu @ 2021-04-17 14:36 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Andrew Jones, peterx, Paolo Bonzini, Sean Christopherson

Please consider this series to supersede the previous v2 [1], which was a
single patch; please find more information in the commit message of each
patch.

I kept the versioning since it solves the same problem, but the code is mostly
rewritten.

I've run the workloads below in parallel for a few hours to test this series
(pinning both the test and a busy loop onto the same core forces frequent
preemption of the vcpu thread, which makes the races much easier to hit):

  (1) while :; do taskset -c 1 ./dirty_log_test; done
  (2) taskset -c 1 bash -c "while :; do :; done"

Review comments are greatly appreciated.

Thanks,

[1] https://lore.kernel.org/lkml/20210413213641.23742-1-peterx@redhat.com/

Peter Xu (2):
  KVM: selftests: Sync data verify of dirty logging with guest sync
  KVM: selftests: Wait for vcpu thread before signal setup

 tools/testing/selftests/kvm/dirty_log_test.c | 68 +++++++++++++++++---
 1 file changed, 58 insertions(+), 10 deletions(-)

-- 
2.26.2




* [PATCH v3 1/2] KVM: selftests: Sync data verify of dirty logging with guest sync
  2021-04-17 14:36 [PATCH v3 0/2] KVM: selftests: fix races in dirty log test Peter Xu
@ 2021-04-17 14:36 ` Peter Xu
  2021-04-18 12:43   ` Peter Xu
  2021-04-17 14:36 ` [PATCH v3 2/2] KVM: selftests: Wait for vcpu thread before signal setup Peter Xu
  1 sibling, 1 reply; 8+ messages in thread
From: Peter Xu @ 2021-04-17 14:36 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Andrew Jones, peterx, Paolo Bonzini, Sean Christopherson

This fixes a bug that can trigger with e.g. "taskset -c 0 ./dirty_log_test" or
when the testing host is very busy.

A similar attempt was made previously [1], but it was not enough; the reason
is stated in the reply [2].

As a summary (partly quoting from [2]):

The problem is that one guest memory write operation (in this specific test)
consists of a few micro-steps while the page is under KVM dirty tracking
(here I'm only considering write-protection rather than PML, but PML should
be similar, at least when the log buffer is full):

  (1) Guest reads the 'iteration' number into a register, prepares to
      write, takes a page fault
  (2) KVM sets the dirty bit in either the dirty bitmap or the dirty ring
  (3) Control returns to the guest and the data is written

When we verify the data, we assumed that all these steps are "atomic": once
(1) has happened for a page, we assume (2) & (3) must have happened as well.
We had a trick to work around the non-atomicity of the three steps: the
previous version of this patch tried to fix the atomicity of steps (2)+(3)
by explicitly letting the main thread wait for at least one vmenter of the
vcpu thread, which should work.  However, what I overlooked is that we still
race when the vcpu can be interrupted between (1) and (2).
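
For context, the guest-side write loop of this test looks roughly like the
following (a simplified sketch, not the exact selftest code;
random_test_page() stands in for the real random-address computation):

    while (true) {
            for (i = 0; i < TEST_PAGES_PER_LOOP; i++) {
                    uint64_t *addr = random_test_page();
                    /* (1) read 'iteration', start the write, maybe fault */
                    *addr = READ_ONCE(iteration);
                    /* (2)+(3) happen inside KVM before the write lands */
            }
            GUEST_SYNC(1);
    }

If the vcpu thread is preempted between (1) and (2), the eventual write can
store a stale iteration number even though the dirty bit is set much later.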

One example call trace where this can happen: the guest reads an old
iteration number, then gets interrupted before even setting the dirty bit
and flushing the data:

    __schedule+1742
    __cond_resched+52
    __get_user_pages+530
    get_user_pages_unlocked+197
    hva_to_pfn+206
    try_async_pf+132
    direct_page_fault+320
    kvm_mmu_page_fault+103
    vmx_handle_exit+288
    vcpu_enter_guest+2460
    kvm_arch_vcpu_ioctl_run+325
    kvm_vcpu_ioctl+526
    __x64_sys_ioctl+131
    do_syscall_64+51
    entry_SYSCALL_64_after_hwframe+68

This means the iteration number cached in a vcpu register can be very old by
the time the dirty bit is set and the data is flushed.

So far I don't see an easy way to guarantee the atomicity of all of steps
(1)-(3) other than syncing with the guest at its GUEST_SYNC() point when we
verify the dirty bits, which is what this patch does.
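
Concretely, the verification step becomes a small handshake between the two
threads (a sketch of this patch's logic; sem_wait_until() is the selftest
helper that retries sem_wait() on EINTR):

    /* main thread, before verifying the dirty bits */
    atomic_set(&vcpu_sync_stop_requested, true);
    sem_wait_until(&sem_vcpu_stop);      /* vcpu parked at GUEST_SYNC */
    vm_dirty_log_verify(mode, bmap);
    sem_post(&sem_vcpu_cont);            /* let the vcpu continue */

    /* vcpu thread, after each GUEST_SYNC exit */
    if (atomic_read(&vcpu_sync_stop_requested)) {
            atomic_set(&vcpu_sync_stop_requested, false);
            sem_post(&sem_vcpu_stop);
            sem_wait_until(&sem_vcpu_cont);
    }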

[1] https://lore.kernel.org/lkml/20210413213641.23742-1-peterx@redhat.com/
[2] https://lore.kernel.org/lkml/20210417140956.GV4440@xz-x1/

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 60 ++++++++++++++++----
 1 file changed, 50 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index bb2752d78fe3..510884f0eab8 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -17,6 +17,7 @@
 #include <linux/bitmap.h>
 #include <linux/bitops.h>
 #include <asm/barrier.h>
+#include <linux/atomic.h>
 
 #include "kvm_util.h"
 #include "test_util.h"
@@ -137,12 +138,20 @@ static uint64_t host_clear_count;
 static uint64_t host_track_next_count;
 
 /* Whether dirty ring reset is requested, or finished */
-static sem_t dirty_ring_vcpu_stop;
-static sem_t dirty_ring_vcpu_cont;
+static sem_t sem_vcpu_stop;
+static sem_t sem_vcpu_cont;
+/*
+ * Only set by the main thread, and only cleared by the vcpu thread.  It
+ * requests the vcpu thread to stop at the next GUEST_SYNC, since GUEST_SYNC
+ * is the only place where both the "dirty bit" and "dirty data" are
+ * guaranteed to match.  E.g., SIG_IPI won't guarantee that (the vcpu may be
+ * interrupted after setting the dirty bit but before the data is flushed).
+ */
+static atomic_t vcpu_sync_stop_requested;
 /*
  * This is updated by the vcpu thread to tell the host whether it's a
  * ring-full event.  It should only be read until a sem_wait() of
- * dirty_ring_vcpu_stop and before vcpu continues to run.
+ * sem_vcpu_stop and before vcpu continues to run.
  */
 static bool dirty_ring_vcpu_ring_full;
 /*
@@ -234,6 +243,17 @@ static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot,
 	kvm_vm_clear_dirty_log(vm, slot, bitmap, 0, num_pages);
 }
 
+/* Should only be called after a GUEST_SYNC */
+static void vcpu_handle_sync_stop(void)
+{
+	if (atomic_read(&vcpu_sync_stop_requested)) {
+		/* The main thread is sleeping, waiting for us to stop */
+		atomic_set(&vcpu_sync_stop_requested, false);
+		sem_post(&sem_vcpu_stop);
+		sem_wait_until(&sem_vcpu_cont);
+	}
+}
+
 static void default_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
 {
 	struct kvm_run *run = vcpu_state(vm, VCPU_ID);
@@ -244,6 +264,8 @@ static void default_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
 	TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC,
 		    "Invalid guest sync status: exit_reason=%s\n",
 		    exit_reason_str(run->exit_reason));
+
+	vcpu_handle_sync_stop();
 }
 
 static bool dirty_ring_supported(void)
@@ -301,13 +323,13 @@ static void dirty_ring_wait_vcpu(void)
 {
 	/* This makes sure that hardware PML cache flushed */
 	vcpu_kick();
-	sem_wait_until(&dirty_ring_vcpu_stop);
+	sem_wait_until(&sem_vcpu_stop);
 }
 
 static void dirty_ring_continue_vcpu(void)
 {
 	pr_info("Notifying vcpu to continue\n");
-	sem_post(&dirty_ring_vcpu_cont);
+	sem_post(&sem_vcpu_cont);
 }
 
 static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot,
@@ -361,11 +383,11 @@ static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
 		/* Update the flag first before pause */
 		WRITE_ONCE(dirty_ring_vcpu_ring_full,
 			   run->exit_reason == KVM_EXIT_DIRTY_RING_FULL);
-		sem_post(&dirty_ring_vcpu_stop);
+		sem_post(&sem_vcpu_stop);
 		pr_info("vcpu stops because %s...\n",
 			dirty_ring_vcpu_ring_full ?
 			"dirty ring is full" : "vcpu is kicked out");
-		sem_wait_until(&dirty_ring_vcpu_cont);
+		sem_wait_until(&sem_vcpu_cont);
 		pr_info("vcpu continues now.\n");
 	} else {
 		TEST_ASSERT(false, "Invalid guest sync status: "
@@ -377,7 +399,7 @@ static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
 static void dirty_ring_before_vcpu_join(void)
 {
 	/* Kick another round of vcpu just to make sure it will quit */
-	sem_post(&dirty_ring_vcpu_cont);
+	sem_post(&sem_vcpu_cont);
 }
 
 struct log_mode {
@@ -768,7 +790,25 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		usleep(p->interval * 1000);
 		log_mode_collect_dirty_pages(vm, TEST_MEM_SLOT_INDEX,
 					     bmap, host_num_pages);
+
+		/*
+		 * See the vcpu_sync_stop_requested definition for details on
+		 * why we need to stop the vcpu when verifying data.
+		 */
+		atomic_set(&vcpu_sync_stop_requested, true);
+		sem_wait_until(&sem_vcpu_stop);
+		/*
+		 * NOTE: for the dirty ring, it's possible that we didn't stop
+		 * at GUEST_SYNC but instead stopped because the ring is full;
+		 * that's okay too, because a full ring only means we miss the
+		 * flush of the last page, and since we handle the last page
+		 * specially, verification will still pass.
+		 */
+		assert(host_log_mode == LOG_MODE_DIRTY_RING ||
+		       atomic_read(&vcpu_sync_stop_requested) == false);
 		vm_dirty_log_verify(mode, bmap);
+		sem_post(&sem_vcpu_cont);
+
 		iteration++;
 		sync_global_to_guest(vm, iteration);
 	}
@@ -819,8 +859,8 @@ int main(int argc, char *argv[])
 	};
 	int opt, i;
 
-	sem_init(&dirty_ring_vcpu_stop, 0, 0);
-	sem_init(&dirty_ring_vcpu_cont, 0, 0);
+	sem_init(&sem_vcpu_stop, 0, 0);
+	sem_init(&sem_vcpu_cont, 0, 0);
 
 	guest_modes_append_default();
 
-- 
2.26.2



* [PATCH v3 2/2] KVM: selftests: Wait for vcpu thread before signal setup
  2021-04-17 14:36 [PATCH v3 0/2] KVM: selftests: fix races in dirty log test Peter Xu
  2021-04-17 14:36 ` [PATCH v3 1/2] KVM: selftests: Sync data verify of dirty logging with guest sync Peter Xu
@ 2021-04-17 14:36 ` Peter Xu
  2021-04-20  8:14   ` Paolo Bonzini
  1 sibling, 1 reply; 8+ messages in thread
From: Peter Xu @ 2021-04-17 14:36 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Andrew Jones, peterx, Paolo Bonzini, Sean Christopherson

The main thread could start to send SIG_IPI at any time, even before the
signal is blocked on the vcpu thread.  Reuse sem_vcpu_stop to sync on that,
so that when SIG_IPI is sent the signal always lands correctly as an -EINTR.

Without this patch, on very busy cores the dirty_log_test could fail outright
on receiving a SIGUSR1 without a handler (when the vcpu thread runs far
slower than the main thread).
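
A sketch of the problematic ordering (SIG_IPI is SIGUSR1 in this test):

        main thread                          vcpu thread
        -----------                          -----------
        pthread_create(&vcpu_thread, ...)
                                             (not scheduled yet)
        vcpu_kick()
          -> pthread_kill(SIG_IPI)
          -> default SIGUSR1 disposition
             terminates the whole test
                                             pthread_sigmask(SIG_BLOCK, ...)
                                               /* too late */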

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 510884f0eab8..25230e799bc4 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -534,6 +534,12 @@ static void *vcpu_worker(void *data)
 	sigemptyset(sigset);
 	sigaddset(sigset, SIG_IPI);
 
+	/*
+	 * Tell the main thread that signals are set up already; borrow
+	 * sem_vcpu_stop for this even though it wasn't meant for it.
+	 */
+	sem_post(&sem_vcpu_stop);
+
 	guest_array = addr_gva2hva(vm, (vm_vaddr_t)random_array);
 
 	while (!READ_ONCE(host_quit)) {
@@ -785,6 +791,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
 
+	sem_wait_until(&sem_vcpu_stop);
+
 	while (iteration < p->iterations) {
 		/* Give the vcpu thread some time to dirty some pages */
 		usleep(p->interval * 1000);
-- 
2.26.2



* Re: [PATCH v3 1/2] KVM: selftests: Sync data verify of dirty logging with guest sync
  2021-04-17 14:36 ` [PATCH v3 1/2] KVM: selftests: Sync data verify of dirty logging with guest sync Peter Xu
@ 2021-04-18 12:43   ` Peter Xu
  2021-04-20  8:07     ` Paolo Bonzini
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Xu @ 2021-04-18 12:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Andrew Jones, Paolo Bonzini, Sean Christopherson

On Sat, Apr 17, 2021 at 10:36:01AM -0400, Peter Xu wrote:
> This fixes a bug that can trigger with e.g. "taskset -c 0 ./dirty_log_test" or
> when the testing host is very busy.
> 
> [...]

We should also squash the change below into this patch (which I forgot):

----8<-----
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 25230e799bc4..d3050d1c2cd0 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -377,7 +377,7 @@ static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
        /* A ucall-sync or ring-full event is allowed */
        if (get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC) {
                /* We should allow this to continue */
-               ;
+               vcpu_handle_sync_stop();
        } else if (run->exit_reason == KVM_EXIT_DIRTY_RING_FULL ||
                   (ret == -1 && err == EINTR)) {
                /* Update the flag first before pause */
----8<-----

That was my intention when I introduced vcpu_handle_sync_stop(), but I forgot
to add it...

This does not affect the correctness of this patch, since the dirty ring test
can always trigger the ring-full event, which achieves a similar goal; but it
allows the dirty ring test to stop even earlier, before the ring is full.

-- 
Peter Xu



* Re: [PATCH v3 1/2] KVM: selftests: Sync data verify of dirty logging with guest sync
  2021-04-18 12:43   ` Peter Xu
@ 2021-04-20  8:07     ` Paolo Bonzini
  2021-04-20 13:10       ` Peter Xu
  0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2021-04-20  8:07 UTC (permalink / raw)
  To: Peter Xu, linux-kernel, kvm; +Cc: Andrew Jones, Sean Christopherson

On 18/04/21 14:43, Peter Xu wrote:
> ----8<-----
> diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> index 25230e799bc4..d3050d1c2cd0 100644
> --- a/tools/testing/selftests/kvm/dirty_log_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> @@ -377,7 +377,7 @@ static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
>          /* A ucall-sync or ring-full event is allowed */
>          if (get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC) {
>                  /* We should allow this to continue */
> -               ;
> +               vcpu_handle_sync_stop();
>          } else if (run->exit_reason == KVM_EXIT_DIRTY_RING_FULL ||
>                     (ret == -1 && err == EINTR)) {
>                  /* Update the flag first before pause */
> ----8<-----
> 
> That's my intention when I introduce vcpu_handle_sync_stop(), but forgot to
> add...

And possibly even this (untested though):

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index ffa4e2791926..918954f01cef 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -383,6 +383,7 @@ static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
  		/* Update the flag first before pause */
  		WRITE_ONCE(dirty_ring_vcpu_ring_full,
  			   run->exit_reason == KVM_EXIT_DIRTY_RING_FULL);
+		atomic_set(&vcpu_sync_stop_requested, false);
  		sem_post(&sem_vcpu_stop);
  		pr_info("vcpu stops because %s...\n",
  			dirty_ring_vcpu_ring_full ?
@@ -804,8 +805,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
  		 * the flush of the last page, and since we handle the last
  		 * page specially verification will succeed anyway.
  		 */
-		assert(host_log_mode == LOG_MODE_DIRTY_RING ||
-		       atomic_read(&vcpu_sync_stop_requested) == false);
+		assert(atomic_read(&vcpu_sync_stop_requested) == false);
  		vm_dirty_log_verify(mode, bmap);
  		sem_post(&sem_vcpu_cont);
  

You can submit all these as a separate patch.

Paolo



* Re: [PATCH v3 2/2] KVM: selftests: Wait for vcpu thread before signal setup
  2021-04-17 14:36 ` [PATCH v3 2/2] KVM: selftests: Wait for vcpu thread before signal setup Peter Xu
@ 2021-04-20  8:14   ` Paolo Bonzini
  0 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2021-04-20  8:14 UTC (permalink / raw)
  To: Peter Xu, linux-kernel, kvm; +Cc: Andrew Jones, Sean Christopherson

On 17/04/21 16:36, Peter Xu wrote:
> The main thread could start to send SIG_IPI at any time, even before the
> signal is blocked on the vcpu thread.  Reuse sem_vcpu_stop to sync on that,
> so that when SIG_IPI is sent the signal always lands correctly as an -EINTR.
> 
> Without this patch, on very busy cores the dirty_log_test could fail outright
> on receiving a SIGUSR1 without a handler (when the vcpu thread runs far
> slower than the main thread).
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

This should be a better way to do the same:

----------- 8< ------------
From: Paolo Bonzini <pbonzini@redhat.com>
Subject: [PATCH] KVM: selftests: Wait for vcpu thread before signal setup

The main thread could start to send SIG_IPI at any time, even before the
signal is blocked on the vcpu thread.  Therefore, start the vcpu thread with
the signal already blocked: a new thread inherits the signal mask of its
creator, so blocking SIG_IPI in main() before pthread_create() closes the
window.

Without this patch, on very busy cores the dirty_log_test could fail outright
on receiving a SIGUSR1 without a handler (when the vcpu thread runs far
slower than the main thread).

Reported-by: Peter Xu <peterx@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index ffa4e2791926..048973d50a45 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -527,9 +527,8 @@ static void *vcpu_worker(void *data)
  	 */
  	sigmask->len = 8;
  	pthread_sigmask(0, NULL, sigset);
+	sigdelset(sigset, SIG_IPI);
  	vcpu_ioctl(vm, VCPU_ID, KVM_SET_SIGNAL_MASK, sigmask);
-	sigaddset(sigset, SIG_IPI);
-	pthread_sigmask(SIG_BLOCK, sigset, NULL);
  
  	sigemptyset(sigset);
  	sigaddset(sigset, SIG_IPI);
@@ -858,6 +857,7 @@ int main(int argc, char *argv[])
  		.interval = TEST_HOST_LOOP_INTERVAL,
  	};
  	int opt, i;
+	sigset_t sigset;
  
  	sem_init(&sem_vcpu_stop, 0, 0);
  	sem_init(&sem_vcpu_cont, 0, 0);
@@ -916,6 +916,11 @@ int main(int argc, char *argv[])
  
  	srandom(time(0));
  
+	/* Ensure that vCPU threads start with SIG_IPI blocked.  */
+	sigemptyset(&sigset);
+	sigaddset(&sigset, SIG_IPI);
+	pthread_sigmask(SIG_BLOCK, &sigset, NULL);
+
  	if (host_log_mode_option == LOG_MODE_ALL) {
  		/* Run each log mode */
  		for (i = 0; i < LOG_MODE_NUM; i++) {


> [...]



* Re: [PATCH v3 1/2] KVM: selftests: Sync data verify of dirty logging with guest sync
  2021-04-20  8:07     ` Paolo Bonzini
@ 2021-04-20 13:10       ` Peter Xu
  2021-04-20 14:07         ` Paolo Bonzini
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Xu @ 2021-04-20 13:10 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Andrew Jones, Sean Christopherson

On Tue, Apr 20, 2021 at 10:07:16AM +0200, Paolo Bonzini wrote:
> On 18/04/21 14:43, Peter Xu wrote:
> > [...]
> 
> And possibly even this (untested though):
> 
> diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> index ffa4e2791926..918954f01cef 100644
> --- a/tools/testing/selftests/kvm/dirty_log_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> @@ -383,6 +383,7 @@ static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
>  		/* Update the flag first before pause */
>  		WRITE_ONCE(dirty_ring_vcpu_ring_full,
>  			   run->exit_reason == KVM_EXIT_DIRTY_RING_FULL);
> +		atomic_set(&vcpu_sync_stop_requested, false);
>  		sem_post(&sem_vcpu_stop);
>  		pr_info("vcpu stops because %s...\n",
>  			dirty_ring_vcpu_ring_full ?
> @@ -804,8 +805,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>  		 * the flush of the last page, and since we handle the last
>  		 * page specially verification will succeed anyway.
>  		 */
> -		assert(host_log_mode == LOG_MODE_DIRTY_RING ||
> -		       atomic_read(&vcpu_sync_stop_requested) == false);
> +		assert(atomic_read(&vcpu_sync_stop_requested) == false);
>  		vm_dirty_log_verify(mode, bmap);
>  		sem_post(&sem_vcpu_cont);
> 
> You can submit all these as a separate patch.

But it could race, then?

        main thread                 vcpu thread
        -----------                 -----------
                                  ring full
                                    vcpu_sync_stop_requested=0
                                    sem_post(&sem_vcpu_stop)
     vcpu_sync_stop_requested=1
     sem_wait(&sem_vcpu_stop)
     assert(vcpu_sync_stop_requested==0)   <----

Thanks,

-- 
Peter Xu



* Re: [PATCH v3 1/2] KVM: selftests: Sync data verify of dirty logging with guest sync
  2021-04-20 13:10       ` Peter Xu
@ 2021-04-20 14:07         ` Paolo Bonzini
  0 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2021-04-20 14:07 UTC (permalink / raw)
  To: Peter Xu; +Cc: linux-kernel, kvm, Andrew Jones, Sean Christopherson

On 20/04/21 15:10, Peter Xu wrote:
> On Tue, Apr 20, 2021 at 10:07:16AM +0200, Paolo Bonzini wrote:
>> [...]
>>
>> You can submit all these as a separate patch.
> 
> But it could race, then?
> 
>          main thread                 vcpu thread
>          -----------                 -----------
>                                    ring full
>                                      vcpu_sync_stop_requested=0
>                                      sem_post(&sem_vcpu_stop)
>       vcpu_sync_stop_requested=1
>       sem_wait(&sem_vcpu_stop)
>       assert(vcpu_sync_stop_requested==0)   <----

Yes, it could indeed.

Thanks,

Paolo

