From: Matthew Wilcox <willy@infradead.org>
To: Alistair Popple <apopple@nvidia.com>
Cc: linux-mm@kvack.org, nouveau@lists.freedesktop.org,
	bskeggs@redhat.com, akpm@linux-foundation.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	kvm-ppc@vger.kernel.org, dri-devel@lists.freedesktop.org,
	jhubbard@nvidia.com, rcampbell@nvidia.com, jglisse@redhat.com
Subject: Re: [PATCH v5 1/8] mm: Remove special swap entry functions
Date: Tue, 9 Mar 2021 12:49:49 +0000	[thread overview]
Message-ID: <20210309124949.GJ3479805@casper.infradead.org> (raw)
In-Reply-To: <20210309121505.23608-2-apopple@nvidia.com>

On Tue, Mar 09, 2021 at 11:14:58PM +1100, Alistair Popple wrote:
> -static inline struct page *migration_entry_to_page(swp_entry_t entry)
> -{
> -	struct page *p = pfn_to_page(swp_offset(entry));
> -	/*
> -	 * Any use of migration entries may only occur while the
> -	 * corresponding page is locked
> -	 */
> -	BUG_ON(!PageLocked(compound_head(p)));
> -	return p;
> -}

> +static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
> +{
> +	struct page *p = pfn_to_page(swp_offset(entry));
> +
> +	/*
> +	 * Any use of migration entries may only occur while the
> +	 * corresponding page is locked
> +	 */
> +	BUG_ON(is_migration_entry(entry) && !PageLocked(compound_head(p)));
> +
> +	return p;
> +}

I appreciate you're only moving this code, but PageLocked includes an
implicit compound_head():

1. __PAGEFLAG(Locked, locked, PF_NO_TAIL)

2. #define __PAGEFLAG(uname, lname, policy)                                \
        TESTPAGEFLAG(uname, lname, policy)                              \

3. #define TESTPAGEFLAG(uname, lname, policy)                              \
static __always_inline int Page##uname(struct page *page)               \
        { return test_bit(PG_##lname, &policy(page, 0)->flags); }

4. #define PF_NO_TAIL(page, enforce) ({                                    \
                VM_BUG_ON_PGFLAGS(enforce && PageTail(page), page);     \
                PF_POISONED_CHECK(compound_head(page)); })

5. #define PF_POISONED_CHECK(page) ({                                      \
                VM_BUG_ON_PGFLAGS(PagePoisoned(page), page);            \
                page; })


This macrology isn't easy to understand the first time you read it (nor,
indeed, the tenth time), so let me decode it:

Substitute 5 into 4 and remove irrelevancies:

6. #define PF_NO_TAIL(page, enforce) compound_head(page)

Expand 1 in 2:

7.	TESTPAGEFLAG(Locked, locked, PF_NO_TAIL)

Expand 7 in 3:

8. static __always_inline int PageLocked(struct page *page)
	{ return test_bit(PG_locked, &PF_NO_TAIL(page, 0)->flags); }

Expand 6 in 8:

9. static __always_inline int PageLocked(struct page *page)
	{ return test_bit(PG_locked, &compound_head(page)->flags); }

(in case it's not clear, compound_head() is idempotent.  that is:
	f(f(a)) == f(a))
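
For illustration only (a sketch, not part of the patch under review), the
helper could then drop the explicit compound_head() call and rely on
PageLocked() resolving the head page itself, assuming the rest of the v5
patch stays as posted:

static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
{
	struct page *p = pfn_to_page(swp_offset(entry));

	/*
	 * Any use of migration entries may only occur while the
	 * corresponding page is locked.  PageLocked() already tests the
	 * head page (via PF_NO_TAIL), so no explicit compound_head()
	 * is needed here.
	 */
	BUG_ON(is_migration_entry(entry) && !PageLocked(p));

	return p;
}

Because compound_head() is idempotent, this is behaviourally identical to
the posted version; the only difference is avoiding the redundant call at
the BUG_ON() site.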

Thread overview: 53+ messages

2021-03-09 12:14 [PATCH v5 0/8] Add support for SVM atomics in Nouveau Alistair Popple
2021-03-09 12:14 ` [PATCH v5 1/8] mm: Remove special swap entry functions Alistair Popple
2021-03-09 12:49   ` Matthew Wilcox [this message]
2021-03-12  4:42     ` Alistair Popple
2021-03-10  2:59   ` kernel test robot
2021-03-09 12:14 ` [PATCH v5 2/8] mm/swapops: Rework swap entry manipulation code Alistair Popple
2021-03-09 12:15 ` [PATCH v5 3/8] mm/rmap: Split try_to_munlock from try_to_unmap Alistair Popple
2021-03-09 12:15 ` [PATCH v5 4/8] mm/rmap: Split migration into its own function Alistair Popple
2021-03-09 12:15 ` [PATCH v5 5/8] mm: Device exclusive memory access Alistair Popple
2021-03-10  3:51   ` kernel test robot
2021-03-09 12:15 ` [PATCH v5 6/8] mm: Selftests for exclusive device memory Alistair Popple
2021-03-09 22:38 ` [PATCH v5 7/8] nouveau/svm: Refactor nouveau_range_fault Alistair Popple
2021-03-09 22:38 ` [PATCH v5 8/8] nouveau/svm: Implement atomic SVM access Alistair Popple