From: Mike Rapoport <rppt@linux.vnet.ibm.com>
To: Peter Xu <peterx@redhat.com>
Cc: linux-kernel@vger.kernel.org, Shuah Khan <shuah@kernel.org>,
	Jerome Glisse <jglisse@redhat.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	linux-mm@kvack.org, Zi Yan <zi.yan@cs.rutgers.edu>,
	"Kirill A . Shutemov" <kirill@shutemov.name>,
	linux-kselftest@vger.kernel.org, Shaohua Li <shli@fb.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 3/3] userfaultfd: selftest: recycle lock threads first
Date: Sat, 29 Sep 2018 13:32:52 +0300	[thread overview]
Message-ID: <20180929103252.GC6429@rapoport-lnx> (raw)
In-Reply-To: <20180929084311.15600-4-peterx@redhat.com>

On Sat, Sep 29, 2018 at 04:43:11PM +0800, Peter Xu wrote:
> Currently we recycle the uffd servicing threads before the lock
> threads.  A lock thread can still be blocked on a pthread mutex after
> the servicing thread for that cpu has already quit, so the lock thread
> stays blocked forever and hangs the test program.  To fix this possible
> race, recycle the lock threads first.
> 
> This never happens with the current missing-only tests, but when I
> start to run the write-protection tests (the feature is not yet posted
> upstream) it happens on every run, likely because the new test needs to
> service two page faults for each lock operation.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

Acked-by: Mike Rapoport <rppt@linux.vnet.ibm.com>

> ---
>  tools/testing/selftests/vm/userfaultfd.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
> index f79706f13ce7..a388675b15af 100644
> --- a/tools/testing/selftests/vm/userfaultfd.c
> +++ b/tools/testing/selftests/vm/userfaultfd.c
> @@ -623,6 +623,12 @@ static int stress(unsigned long *userfaults)
>  	if (uffd_test_ops->release_pages(area_src))
>  		return 1;
> 
> +
> +	finished = 1;
> +	for (cpu = 0; cpu < nr_cpus; cpu++)
> +		if (pthread_join(locking_threads[cpu], NULL))
> +			return 1;
> +
>  	for (cpu = 0; cpu < nr_cpus; cpu++) {
>  		char c;
>  		if (bounces & BOUNCE_POLL) {
> @@ -640,11 +646,6 @@ static int stress(unsigned long *userfaults)
>  		}
>  	}
> 
> -	finished = 1;
> -	for (cpu = 0; cpu < nr_cpus; cpu++)
> -		if (pthread_join(locking_threads[cpu], NULL))
> -			return 1;
> -
>  	return 0;
>  }
> 
> -- 
> 2.17.1
> 

-- 
Sincerely yours,
Mike.


  reply	other threads:[~2018-09-29 10:33 UTC|newest]

Thread overview: 24+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-09-29  8:43 [PATCH 0/3] userfaultfd: selftests: cleanups and trivial fixes Peter Xu
2018-09-29  8:43 ` [PATCH 1/3] userfaultfd: selftest: cleanup help messages Peter Xu
2018-09-29 10:28   ` Mike Rapoport
2018-09-30  6:34     ` Peter Xu
2018-09-29  8:43 ` [PATCH 2/3] userfaultfd: selftest: generalize read and poll Peter Xu
2018-09-29 10:31   ` Mike Rapoport
2018-09-29  8:43 ` [PATCH 3/3] userfaultfd: selftest: recycle lock threads first Peter Xu
2018-09-29 10:32   ` Mike Rapoport [this message]
