Date: Tue, 9 Mar 2021 12:49:49 +0000
From: Matthew Wilcox
To: Alistair Popple
Cc: linux-mm@kvack.org, nouveau@lists.freedesktop.org, bskeggs@redhat.com,
        akpm@linux-foundation.org, linux-doc@vger.kernel.org,
        linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org,
        dri-devel@lists.freedesktop.org, jhubbard@nvidia.com,
        rcampbell@nvidia.com, jglisse@redhat.com
Subject: Re: [PATCH v5 1/8] mm: Remove special swap entry functions
Message-ID: <20210309124949.GJ3479805@casper.infradead.org>
References: <20210309121505.23608-1-apopple@nvidia.com>
        <20210309121505.23608-2-apopple@nvidia.com>
In-Reply-To: <20210309121505.23608-2-apopple@nvidia.com>

On Tue, Mar 09, 2021 at 11:14:58PM +1100, Alistair Popple wrote:
> -static inline struct page *migration_entry_to_page(swp_entry_t entry)
> -{
> -        struct page *p = pfn_to_page(swp_offset(entry));
> -        /*
> -         * Any use of migration entries may only occur while the
> -         * corresponding page is locked
> -         */
> -        BUG_ON(!PageLocked(compound_head(p)));
> -        return p;
> -}
> +static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
> +{
> +        struct page *p = pfn_to_page(swp_offset(entry));
> +
> +        /*
> +         * Any use of migration entries may only occur while the
> +         * corresponding page is locked
> +         */
> +        BUG_ON(is_migration_entry(entry) && !PageLocked(compound_head(p)));
> +
> +        return p;
> +}

I appreciate you're only moving this code, but PageLocked includes
an implicit compound_head():

1.      __PAGEFLAG(Locked, locked, PF_NO_TAIL)

2.      #define __PAGEFLAG(uname, lname, policy)                        \
                TESTPAGEFLAG(uname, lname, policy)                      \

3.      #define TESTPAGEFLAG(uname, lname, policy)                      \
        static __always_inline int Page##uname(struct page *page)      \
                { return test_bit(PG_##lname, &policy(page, 0)->flags); }

4.      #define PF_NO_TAIL(page, enforce) ({                            \
                VM_BUG_ON_PGFLAGS(enforce && PageTail(page), page);     \
                PF_POISONED_CHECK(compound_head(page)); })

5.      #define PF_POISONED_CHECK(page) ({                              \
                VM_BUG_ON_PGFLAGS(PagePoisoned(page), page);            \
                page; })

This macrology isn't easy to understand the first time you read it
(nor, indeed, the tenth time), so let me decode it:

Substitute 5 into 4 and remove irrelevancies:

6.      #define PF_NO_TAIL(page, enforce) compound_head(page)

Expand 1 in 2:

7.      TESTPAGEFLAG(Locked, locked, PF_NO_TAIL)

Expand 7 in 3:

8.      static __always_inline int PageLocked(struct page *page)
                { return test_bit(PG_locked, &PF_NO_TAIL(page, 0)->flags); }

Expand 6 in 8:

9.      static __always_inline int PageLocked(struct page *page)
                { return test_bit(PG_locked, &compound_head(page)->flags); }

(in case it's not clear, compound_head() is idempotent.  that is:
 f(f(a)) == f(a))
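
To make the idempotence point concrete, here is a tiny sketch
(illustrative only, not part of the patch; it simply restates
f(f(a)) == f(a) using the usual compound_head() and VM_BUG_ON_PAGE()
helpers):

/*
 * Illustrative only: compound_head() of a head page is that page itself,
 * so applying it a second time can never change the result.
 */
static inline void assert_compound_head_idempotent(struct page *page)
{
        struct page *head = compound_head(page);

        VM_BUG_ON_PAGE(compound_head(head) != head, head);
}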
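
Given that, the explicit compound_head() inside the new BUG_ON() is
redundant.  A minimal sketch of the simplification this suggests
(illustrative, not the hunk as posted) would be:

static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
{
        struct page *p = pfn_to_page(swp_offset(entry));

        /*
         * Any use of migration entries may only occur while the
         * corresponding page is locked.  PageLocked() already resolves
         * tail pages via compound_head(), so no explicit call is needed.
         */
        BUG_ON(is_migration_entry(entry) && !PageLocked(p));

        return p;
}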
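
For completeness, a hypothetical caller sketch (the function name and
its use here are assumptions for illustration; the real call sites are
converted elsewhere in the series).  Both migration and device-private
entries encode a pfn, which is what the pfn_swap_entry_to_page() name
reflects:

/* Hypothetical caller, assuming the usual swapops.h helpers. */
static struct page *entry_to_page_if_pfn(swp_entry_t entry)
{
        if (is_migration_entry(entry) || is_device_private_entry(entry))
                return pfn_swap_entry_to_page(entry);

        return NULL;
}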